| column | type | min length | max length |
|---|---|---|---|
| sha | string | 40 | 40 |
| text | string | 1 | 13.4M |
| id | string | 2 | 117 |
| tags | sequence | 1 | 7.91k |
| created_at | string | 25 | 25 |
| metadata | string | 2 | 875k |
| last_modified | string | 25 | 25 |
| arxiv | sequence | 0 | 25 |
| languages | sequence | 0 | 7.91k |
| tags_str | string | 17 | 159k |
| text_str | string | 1 | 447k |
| text_lists | sequence | 0 | 352 |
| processed_texts | sequence | 1 | 353 |
| tokens_length | sequence | 1 | 353 |
| input_texts | sequence | 1 | 40 |
51e177a30298a01c8911d09345659b2a12ecd34b
It is what it says on the tin: 2k websites that don't run JavaScript (i.e., probably really old and simple!). Filtering needs to be done to take out the defunct sites and error messages. I don't expect this to be a hard task.
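A minimal filtering sketch, assuming the `content` column listed in the metadata below; the error-page markers are illustrative guesses, not labels from the dataset:

```python
from datasets import load_dataset

ds = load_dataset("crumb/js-free-sites", split="train")

# Heuristic only: tune these markers against the actual data.
ERROR_MARKERS = ("404 not found", "403 forbidden", "domain is for sale", "service unavailable")

def looks_alive(row):
    text = (row["content"] or "").lower()
    # Very short pages are usually parked domains or error stubs.
    return len(text) > 200 and not any(m in text for m in ERROR_MARKERS)

clean = ds.filter(looks_alive)
print(f"kept {len(clean)} of {len(ds)} sites")
```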
crumb/js-free-sites
[ "region:us" ]
2023-12-27T09:05:54+00:00
{"dataset_info": {"features": [{"name": "content", "dtype": "string"}, {"name": "markdown", "dtype": "string"}, {"name": "url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 82642012, "num_examples": 2067}], "download_size": 34862377, "dataset_size": 82642012}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-27T09:10:06+00:00
[]
[]
TAGS #region-us
It is what it says on the tin: 2k websites that don't run JavaScript (i.e., probably really old and simple!). Filtering needs to be done to take out the defunct sites and error messages. I don't expect this to be a hard task.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
8ea1f1dfafd8f548e5ff75c4babd9cf33be6a3bd
# ECMWF's ERA5, HRES, (and fake) data, formatted for DeepMind GraphCast

Original files are from this Google Cloud Bucket: https://console.cloud.google.com/storage/browser/dm_graphcast

This repo contains both the `dataset` and `stats` files needed for GraphCast inference.

## License and Attribution

ECMWF data products are subject to the following terms:

1. Copyright statement: Copyright "© 2023 European Centre for Medium-Range Weather Forecasts (ECMWF)".
2. Source: www.ecmwf.int
3. Licence Statement: ECMWF data is published under a Creative Commons Attribution 4.0 International (CC BY 4.0). https://creativecommons.org/licenses/by/4.0/
4. Disclaimer: ECMWF does not accept any liability whatsoever for any error or omission in the data, their availability, or for any loss or damage arising from their use.

## Usage

Use the Huggingface Hub file system to load files. The `datasets` library doesn't support netCDF files yet.

```python
from huggingface_hub import HfFileSystem, hf_hub_download
import xarray

fs = HfFileSystem()
files = [
    file.rsplit("/", 1)[1]
    for file in fs.ls("datasets/shermansiu/dm_graphcast_datasets/dataset", detail=False)
]
local_file: str = hf_hub_download(
    repo_id="shermansiu/dm_graphcast_datasets",
    filename=f"dataset/{files[0]}",
    repo_type="dataset",
)
with open(local_file, "rb") as f:
    example_batch = xarray.load_dataset(f).compute()
```

## Citation

- Paper: https://www.science.org/doi/10.1126/science.adi2336
- Preprint: https://arxiv.org/abs/2212.12794

```
@article{doi:10.1126/science.adi2336,
  author = {Remi Lam and Alvaro Sanchez-Gonzalez and Matthew Willson and Peter Wirnsberger and Meire Fortunato and Ferran Alet and Suman Ravuri and Timo Ewalds and Zach Eaton-Rosen and Weihua Hu and Alexander Merose and Stephan Hoyer and George Holland and Oriol Vinyals and Jacklynn Stott and Alexander Pritzel and Shakir Mohamed and Peter Battaglia},
  title = {Learning skillful medium-range global weather forecasting},
  journal = {Science},
  volume = {382},
  number = {6677},
  pages = {1416-1421},
  year = {2023},
  doi = {10.1126/science.adi2336},
  URL = {https://www.science.org/doi/abs/10.1126/science.adi2336},
  eprint = {https://www.science.org/doi/pdf/10.1126/science.adi2336},
  abstract = {Global medium-range weather forecasting is critical to decision-making across many social and economic domains. Traditional numerical weather prediction uses increased compute resources to improve forecast accuracy but does not directly use historical weather data to improve the underlying model. Here, we introduce GraphCast, a machine learning–based method trained directly from reanalysis data. It predicts hundreds of weather variables for the next 10 days at 0.25° resolution globally in under 1 minute. GraphCast significantly outperforms the most accurate operational deterministic systems on 90\% of 1380 verification targets, and its forecasts support better severe event prediction, including tropical cyclone tracking, atmospheric rivers, and extreme temperatures. GraphCast is a key advance in accurate and efficient weather forecasting and helps realize the promise of machine learning for modeling complex dynamical systems. The numerical models used to predict weather are large, complex, and computationally demanding and do not learn from past weather patterns. Lam et al. introduced a machine learning–based method that has been trained directly from reanalysis data of past atmospheric conditions. In this way, the authors were able to quickly predict hundreds of weather variables globally up to 10 days in advance and at high resolution. Their predictions were more accurate than those of traditional weather models in 90\% of tested cases and displayed better severe event prediction for tropical cyclones, atmospheric rivers, and extreme temperatures. —H. Jesse Smith Machine learning leads to better, faster, and cheaper weather forecasting.}
}
```
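GraphCast inference also needs the normalization statistics in the `stats/` folder; a minimal sketch of fetching them the same way, assuming the repo mirrors the upstream `dm_graphcast` file names (`diffs_stddev_by_level.nc`, `mean_by_level.nc`, `stddev_by_level.nc`) - verify with `fs.ls(...)` if unsure:

```python
from huggingface_hub import hf_hub_download
import xarray

# File names are assumed to mirror the dm_graphcast bucket layout; list the
# stats/ folder with HfFileSystem().ls(...) to confirm before relying on them.
stat_names = ["diffs_stddev_by_level.nc", "mean_by_level.nc", "stddev_by_level.nc"]
stats = {}
for name in stat_names:
    path = hf_hub_download(
        repo_id="shermansiu/dm_graphcast_datasets",
        filename=f"stats/{name}",
        repo_type="dataset",
    )
    with open(path, "rb") as f:
        stats[name.removesuffix(".nc")] = xarray.load_dataset(f).compute()
```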
shermansiu/dm_graphcast_datasets
[ "language:en", "license:cc-by-4.0", "weather-forecasting", "climate", "arxiv:2212.12794", "region:us" ]
2023-12-27T09:09:21+00:00
{"language": ["en"], "license": "cc-by-4.0", "pretty_name": "ECMWF's ERA5, HRES, (and fake) data, formatted for DeepMind GraphCast", "tags": ["weather-forecasting", "climate"], "configs": [{"config_name": "source-era5_date-2022-01-01_res-0.25_levels-13_steps-01", "data_files": "dataset/source-era5_date-2022-01-01_res-0.25_levels-13_steps-01.nc"}, {"config_name": "source-era5_date-2022-01-01_res-0.25_levels-13_steps-04", "data_files": "dataset/source-era5_date-2022-01-01_res-0.25_levels-13_steps-04.nc"}, {"config_name": "source-era5_date-2022-01-01_res-0.25_levels-13_steps-12", "data_files": "dataset/source-era5_date-2022-01-01_res-0.25_levels-13_steps-12.nc"}, {"config_name": "source-era5_date-2022-01-01_res-0.25_levels-13_steps-12", "data_files": "dataset/source-era5_date-2022-01-01_res-0.25_levels-13_steps-12.nc"}, {"config_name": "source-era5_date-2022-01-01_res-0.25_levels-37_steps-01", "data_files": "dataset/source-era5_date-2022-01-01_res-0.25_levels-37_steps-01.nc"}, {"config_name": "source-era5_date-2022-01-01_res-0.25_levels-37_steps-04", "data_files": "dataset/source-era5_date-2022-01-01_res-0.25_levels-37_steps-04.nc"}, {"config_name": "source-era5_date-2022-01-01_res-0.25_levels-37_steps-12", "data_files": "dataset/source-era5_date-2022-01-01_res-0.25_levels-37_steps-12.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-13_steps-01", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-13_steps-01.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-13_steps-04", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-13_steps-04.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-13_steps-12", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-13_steps-12.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-13_steps-20", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-13_steps-20.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-13_steps-40", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-13_steps-40.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-37_steps-01", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-37_steps-01.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-37_steps-04", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-37_steps-04.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-37_steps-12", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-37_steps-12.nc"}, {"config_name": "source-era5_date-2022-01-01_res-1.0_levels-37_steps-20", "data_files": "dataset/source-era5_date-2022-01-01_res-1.0_levels-37_steps-20.nc"}]}
2023-12-29T02:01:03+00:00
[ "2212.12794" ]
[ "en" ]
TAGS #language-English #license-cc-by-4.0 #weather-forecasting #climate #arxiv-2212.12794 #region-us
# ECMWF's ERA5, HRES, (and fake) data, formatted for DeepMind GraphCast Original files are from this Google Cloud Bucket: URL This repo contains both the 'dataset' and 'stats' files needed for GraphCast inference. ## License and Attribution ECMWF data products are subject to the following terms: 1. Copyright statement: Copyright "© 2023 European Centre for Medium-Range Weather Forecasts (ECMWF)". 2. Source URL 3. Licence Statement: ECMWF data is published under a Creative Commons Attribution 4.0 International (CC BY 4.0). URL 4. Disclaimer: ECMWF does not accept any liability whatsoever for any error or omission in the data, their availability, or for any loss or damage arising from their use. ## Usage Use the Huggingface Hub file system to load files. The 'datasets' library doesn't support netCDF files yet. - Paper: URL - Preprint: URL
[ "# ECMWF's ERA5, HRES, (and fake) data, formatted for DeepMind GraphCast\n\nOriginal files are from this Google Cloud Bucket: URL\n\nThis repo contains both the 'dataset' and 'stats' files needed for GraphCast inference.", "## License and Attribution\nECMWF data products are subject to the following terms:\n\n1. Copyright statement: Copyright \"© 2023 European Centre for Medium-Range Weather Forecasts (ECMWF)\".\n2. Source URL\n3. Licence Statement: ECMWF data is published under a Creative Commons Attribution 4.0 International (CC BY 4.0). URL\n4. Disclaimer: ECMWF does not accept any liability whatsoever for any error or omission in the data, their availability, or for any loss or damage arising from their use.", "## Usage\nUse the Huggingface Hub file system to load files. The 'datasets' library doesn't support netCDF files yet.\n\n\n\n- Paper: URL\n- Preprint: URL" ]
[ "TAGS\n#language-English #license-cc-by-4.0 #weather-forecasting #climate #arxiv-2212.12794 #region-us \n", "# ECMWF's ERA5, HRES, (and fake) data, formatted for DeepMind GraphCast\n\nOriginal files are from this Google Cloud Bucket: URL\n\nThis repo contains both the 'dataset' and 'stats' files needed for GraphCast inference.", "## License and Attribution\nECMWF data products are subject to the following terms:\n\n1. Copyright statement: Copyright \"© 2023 European Centre for Medium-Range Weather Forecasts (ECMWF)\".\n2. Source URL\n3. Licence Statement: ECMWF data is published under a Creative Commons Attribution 4.0 International (CC BY 4.0). URL\n4. Disclaimer: ECMWF does not accept any liability whatsoever for any error or omission in the data, their availability, or for any loss or damage arising from their use.", "## Usage\nUse the Huggingface Hub file system to load files. The 'datasets' library doesn't support netCDF files yet.\n\n\n\n- Paper: URL\n- Preprint: URL" ]
[ 40, 62, 111, 42 ]
[ "passage: TAGS\n#language-English #license-cc-by-4.0 #weather-forecasting #climate #arxiv-2212.12794 #region-us \n# ECMWF's ERA5, HRES, (and fake) data, formatted for DeepMind GraphCast\n\nOriginal files are from this Google Cloud Bucket: URL\n\nThis repo contains both the 'dataset' and 'stats' files needed for GraphCast inference.## License and Attribution\nECMWF data products are subject to the following terms:\n\n1. Copyright statement: Copyright \"© 2023 European Centre for Medium-Range Weather Forecasts (ECMWF)\".\n2. Source URL\n3. Licence Statement: ECMWF data is published under a Creative Commons Attribution 4.0 International (CC BY 4.0). URL\n4. Disclaimer: ECMWF does not accept any liability whatsoever for any error or omission in the data, their availability, or for any loss or damage arising from their use.## Usage\nUse the Huggingface Hub file system to load files. The 'datasets' library doesn't support netCDF files yet.\n\n\n\n- Paper: URL\n- Preprint: URL" ]
3375790412fe5b50ba2e9afef8d2644ddf3e4511
# Dataset Card for "uf_no_to_questions" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
yimingzhang/uf_no_to_questions
[ "region:us" ]
2023-12-27T09:14:06+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train_prefs", "path": "data/train_prefs-*"}, {"split": "test_prefs", "path": "data/test_prefs-*"}]}], "dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train_prefs", "num_bytes": 204821086, "num_examples": 61966}, {"name": "test_prefs", "num_bytes": 6610257, "num_examples": 2000}], "download_size": 115936255, "dataset_size": 211431343}}
2023-12-27T09:14:12+00:00
[]
[]
TAGS #region-us
# Dataset Card for "uf_no_to_questions" More Information needed
[ "# Dataset Card for \"uf_no_to_questions\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"uf_no_to_questions\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"uf_no_to_questions\"\n\nMore Information needed" ]
58f180e2ad2d5dcefd5e20a7067bc6774b2b2994
# MedQuad-1k: Llama 2 Formatting This is a subset (1000 samples) of the [`keivalya/MedQuad-MedicalQnADataset`](https://huggingface.co/datasets/keivalya/MedQuad-MedicalQnADataset) dataset, processed to match Llama 2's prompt format as described [in this article](https://huggingface.co/blog/llama2#how-to-prompt-llama-2).
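The processing step itself is not shown in the card; a sketch of what it likely looks like, assuming the upstream dataset's `Question`/`Answer` column names and the simple single-turn Llama 2 template from the linked article (both are assumptions to verify):

```python
from datasets import load_dataset

def to_llama2(example):
    # Single-turn Llama 2 chat template from the linked article:
    # <s>[INST] {prompt} [/INST] {answer} </s>
    return {"text": f"<s>[INST] {example['Question']} [/INST] {example['Answer']} </s>"}

# Column names 'Question'/'Answer' are assumed from the upstream dataset card.
raw = load_dataset("keivalya/MedQuad-MedicalQnADataset", split="train")
subset = raw.shuffle(seed=42).select(range(1000)).map(to_llama2)
```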
thillaic/MedQuad-MedicalQnADataset-Llama2-1k
[ "task_categories:conversational", "size_categories:1K<n<10K", "language:en", "license:mit", "medical", "region:us" ]
2023-12-27T09:20:48+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["conversational"], "pretty_name": "MedicalQnADataset", "tags": ["medical"]}
2023-12-27T13:51:17+00:00
[]
[ "en" ]
TAGS #task_categories-conversational #size_categories-1K<n<10K #language-English #license-mit #medical #region-us
# MedQuad-1k: Llama 2 Formatting This is a subset (1000 samples) of the 'keivalya/MedQuad-MedicalQnADataset' dataset, processed to match Llama 2's prompt format as described in this article.
[ "# MedQuad-1k: Llama 2 Formatting\n\nThis is a subset (1000 samples) of the 'keivalya/MedQuad-MedicalQnADataset' dataset, processed to match Llama 2's prompt format as described in this article." ]
[ "TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-English #license-mit #medical #region-us \n", "# MedQuad-1k: Llama 2 Formatting\n\nThis is a subset (1000 samples) of the 'keivalya/MedQuad-MedicalQnADataset' dataset, processed to match Llama 2's prompt format as described in this article." ]
[ 40, 61 ]
[ "passage: TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-English #license-mit #medical #region-us \n# MedQuad-1k: Llama 2 Formatting\n\nThis is a subset (1000 samples) of the 'keivalya/MedQuad-MedicalQnADataset' dataset, processed to match Llama 2's prompt format as described in this article." ]
12d91b7a482bd47e046400e998c277e3a7c76207
# Dataset Card for "dabur_361" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
nitinbhayana/dabur_361
[ "region:us" ]
2023-12-27T09:20:54+00:00
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 140926, "num_examples": 1037}, {"name": "test", "num_bytes": 63802, "num_examples": 462}], "download_size": 98490, "dataset_size": 204728}}
2023-12-27T09:20:59+00:00
[]
[]
TAGS #region-us
# Dataset Card for "dabur_361" More Information needed
[ "# Dataset Card for \"dabur_361\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"dabur_361\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"dabur_361\"\n\nMore Information needed" ]
6a63d6dabeb4c1a4dc8bd0943d5b24936b7fcf03
# Dataset Card for "d2p_raw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/d2p_raw
[ "region:us" ]
2023-12-27T09:45:24+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 261906, "num_examples": 900}, {"name": "test", "num_bytes": 72560, "num_examples": 300}, {"name": "validation", "num_bytes": 72560, "num_examples": 300}], "download_size": 78161, "dataset_size": 407026}}
2023-12-27T10:29:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for "d2p_raw" More Information needed
[ "# Dataset Card for \"d2p_raw\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"d2p_raw\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"d2p_raw\"\n\nMore Information needed" ]
56af31c2bb770371974508a47b3e8193a73dc3af
# Dataset Card for "oasst-ru-dpo-v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
0x7o/oasst-ru-dpo-v1
[ "region:us" ]
2023-12-27T10:13:54+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3847248.0, "num_examples": 1322}], "download_size": 1926633, "dataset_size": 3847248.0}}
2023-12-27T10:13:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for "oasst-ru-dpo-v1" More Information needed
[ "# Dataset Card for \"oasst-ru-dpo-v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"oasst-ru-dpo-v1\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"oasst-ru-dpo-v1\"\n\nMore Information needed" ]
e4e262bd0ceb3c063af37e4cb76f4a59ad91b7eb
# Trelis Function Calling Dataset - VERSION 3 - SAMPLE

> This is a SAMPLE of the v3 dataset available for purchase [here](https://huggingface.co/datasets/Trelis/function_calling_v3/edit/main/README.md).

Features:
- Allows models to be fine-tuned for function-calling.
- The dataset is human generated and does not make use of Llama 2 or OpenAI!
- The dataset includes 66 training rows, 19 validation rows and 5 test rows (for manual evaluation).
- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat

Alternatively, you can find pre-trained function calling models on [Trelis Mart](https://mart.trelis.com)

## Updates since v2
- Cross-compatible function format: The format now matches OpenAI's function format, making it easy to migrate from using OpenAI APIs to any models fine-tuned with this dataset.
- Chain function calling: Ability (particularly with larger models) to first make a call to one function in order to get data for a second function call.
- Supported by inferencing scripts, read more below.

--Change-log--

04Dec2023 - Official release of function_calling_v3

02Dec2023 - Pre-release of function_calling_v3

## Inference Scripts
Out-of-the-box inference scripts are available for purchase:
- Purchase only the function calling inference scripts, [HERE](https://buy.stripe.com/28o00M9K50zp4ow4hf)
- Purchase as part of the full ADVANCED-inference repo, [HERE](https://trelis.com/enterprise-server-api-and-inference-guide/).

## Fine-Tuning Notes and Scripts

The objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends **strongly** on how the attention mask and loss mask are set. For further details see the [Youtube Video Here](https://youtu.be/OQdp-OeG1as).

The fine-tuning script is available for purchase alone [here](https://buy.stripe.com/fZe14Qe0l81R9IQaFy), or is included in the ADVANCED-fine-tuning repository available for purchase on [Trelis.com](https://trelis.com).

### QLoRa Training Notebook for Llama 2 (FREE)
- Access a basic Google Colab script for fine-tuning [here](https://colab.research.google.com/drive/1uMSS1o_8YOPyG1X_4k6ENEE3kJfBGGhH?usp=sharing).

## Licensing

The Function Calling Extended dataset is suitable for commercial use.

Further terms:
- Licenses are not transferable to other users/entities.
- The dataset may not be re-published in its current or derivative form.
- The dataset may be used to train and fine-tune commercial language models.

### Attribution of data sources

This project includes data from the TruthfulQA dataset, which is available at: https://huggingface.co/datasets/truthful_qa. The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.

## Prompt Format (example below is for openchat)

```
B_FUNC, E_FUNC = "You have access to the following functions. Use them if required:\n\n", "\n\n"
B_INST, E_INST = "GPT4 Correct User: ", "<|end_of_turn|>GPT4 Correct Assistant:" # OpenChat style
# B_INST, E_INST = "[INST] ", " [/INST]" # Llama 2 style

functionList = data['test'][index]['functionList']
user_prompt = data['test'][index]['userPrompt']
correct_answer = data['test'][index]['assistantResponse']

prompt = f"{B_FUNC}{functionList.strip()}{E_FUNC}{B_INST}{user_prompt.strip()}{E_INST}\n\n"
```

## Sample Prompt and Response:

```
You have access to the following functions. Use them if required:

[
  {
    "type": "function",
    "function": {
      "name": "get_stock_price",
      "description": "Get the stock price of an array of stocks",
      "parameters": {
        "type": "object",
        "properties": {
          "names": {
            "type": "array",
            "items": { "type": "string" },
            "description": "An array of stocks"
          }
        },
        "required": [ "names" ]
      }
    }
  },
  {
    "type": "function",
    "function": {
      "name": "get_big_stocks",
      "description": "Get the names of the largest N stocks by market cap",
      "parameters": {
        "type": "object",
        "properties": {
          "number": {
            "type": "integer",
            "description": "The number of largest stocks to get the names of, e.g. 25"
          },
          "region": {
            "type": "string",
            "description": "The region to consider, can be \"US\" or \"World\"."
          }
        },
        "required": [ "number" ]
      }
    }
  }
]GPT4 Correct User: Get the price of Apple's stock<|end_of_turn|>GPT4 Correct Assistant:{ "name": "get_stock_price", "arguments": { "names": [ "Apple" ] } }<|end_of_turn|>
```

## CSV File Structure

The generated CSV file has the following columns:

- `functionList`: Descriptions of two functions (the current function and a randomly selected other function).
- `userPrompt`: The user's prompt.
- `assistantResponse`: The assistant's response.

### JSON File Structure

Function metadata format follows the OpenAI standard. Each function file should be a JSON file with the following structure:

```json
{
  "type": "function",
  "function": {
    "name": "function_name",
    "description": "function description",
    "parameters": {
      "type": "object",
      "properties": {
        "property_1": {
          "type": "property_type", // e.g. string
          "description": "property description"
        },
        "property_2": {
          "type": "property_type", // e.g. string
          "description": "property description"
        }
      },
      "required": ["property_1", "property_2"]
    }
  },
  "samplePromptResponsePairs": [
    {
      "prompt": "sample_prompt",
      "response": {
        "name": "generate_password",
        "arguments": {
          "property_1": "property_value",
          "property_2": "property_value"
        }
      }
    },
    ...
  ]
}
```

The `functionMetaData` object describes the function. The `samplePromptResponsePairs` array contains sample prompts and responses for the function.

### Testing JSON Structure

A script named `validate.py` can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure (a minimal sketch follows at the end of this card). To use the script, call it from the command line with the name of the function file as an argument:

```
python validate.py my_function.json
```

## Repo Structure (for prompt dataset generation)

- `functions/`: This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.
- `generate_dataset.py`: This Python script generates the base training and testing dataset CSV files. The first example in each function json file is used in the validation dataset and the rest are used for the train dataset.
- `addBlank.py`: This adds in truthfulqa questions and answers after system prompts with functions.
- `text_responses.py`: adds in prompts to accustom the model to the presence of function descriptions at the start of prompt sequences.

There are also some equivalent files for generating a test dataset, to be used for manual evaluation:

- `test_functions/`: contains functions for manual evaluation, different from the training and test set of functions.
- `create_test_datasets.py`: runs createTestPrompts.py and test_text_responses.py
- `createTestPrompts.py`: creates data rows to test function calling with and without required arguments provided, as well as one chain function calling test (e.g. where one function must be called before the other).
- `test_text_responses.py`: generates data rows to test simple prompts (e.g. "Greetings!"), short nonsensical prompts (e.g. "shop"), and a standard question ("What planets are in our solar system?").
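As referenced above, the card describes `validate.py` without including it; a minimal sketch of the structural check it describes, keyed to the JSON structure shown in the card (assertion messages are illustrative):

```python
import json
import sys

def validate(path: str) -> None:
    with open(path) as f:
        doc = json.load(f)
    assert doc.get("type") == "function", "top-level 'type' must be 'function'"
    fn = doc.get("function", {})
    for key, typ in {"name": str, "description": str, "parameters": dict}.items():
        assert isinstance(fn.get(key), typ), f"'function.{key}' missing or wrong type"
    params = fn["parameters"]
    assert params.get("type") == "object", "'parameters.type' must be 'object'"
    assert isinstance(params.get("properties"), dict), "'parameters.properties' must be an object"
    assert isinstance(params.get("required"), list), "'parameters.required' must be a list"
    assert isinstance(doc.get("samplePromptResponsePairs"), list), \
        "'samplePromptResponsePairs' must be a list"
    print(f"{path}: OK")

if __name__ == "__main__":
    validate(sys.argv[1])
```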
Trelis/function_calling_v3_SAMPLE
[ "task_categories:question-answering", "task_categories:conversational", "task_categories:text-generation", "size_categories:n<1K", "language:en", "function call", "function calling", "function-calling", "region:us" ]
2023-12-27T10:18:25+00:00
{"language": ["en"], "size_categories": ["n<1K"], "task_categories": ["question-answering", "conversational", "text-generation"], "tags": ["function call", "function calling", "function-calling"]}
2023-12-27T10:23:22+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-n<1K #language-English #function call #function calling #function-calling #region-us
# Trelis Function Calling Dataset - VERSION 3 - SAMPLE > This is a SAMPLE of the v3 dataset available for purchase here. Features: - Allows models to be fine-tuned for function-calling. - The dataset is human generated and does not make use of Llama 2 or OpenAI! - The dataset includes 66 training rows, 19 validation rows and 5 test rows (for manual evaluation). - Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat Alternatively, you can find pre-trained function calling models on Trelis Mart ## Updates since v2 - Cross-compatible function format: The format now matches OpenAI's function format, making it easy to migrate from using OpenAI APIs to any models fine-tuned with this dataset. - Chain function calling: Ability (particularly with larger models) to first make a call to one function in order to get data for a second function call. - Supported by inferencing scripts, read more below. --Change-log-- 04Dec2023 - Official release of function_calling_v3 02Dec2023 - Pre-release of function_calling_v3 ## Inference Scripts Out-of-the-box inference scripts are available for purchase: - Purchase only the function calling inference scripts, HERE - Purchase as part of the full ADVANCED-inference repo, HERE. ## Fine-Tuning Notes and Scripts The objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends strongly on how the attention mask and loss mask are set. For further details see the Youtube Video Here. The fine-tuning script is available for purchase alone here, or is included in the ADVANCED-fine-tuning repository available for purchase on URL. ### QLoRa Training Notebook for Llama 2 (FREE) - Access a basic Google Colab script for fine-tuning here. ## Licensing The Function Calling Extended dataset is suitable for commercial use. Further terms: - Licenses are not transferable to other users/entities. - The dataset may not be re-published in it's current or derivative form. - The dataset may be used to train and fine-tune commercial language models. ### Attribution of data sources This project includes data from the TruthfulQA dataset, which is available at: URL The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans. ## Prompt Format (example below is for openchat) ## Sample Prompt and Response: ## CSV File Structure The generated CSV file has the following columns: - 'functionList': Descriptions of two functions (the current function and a randomly selected other function). - 'userPrompt': The user's prompt. - 'assistantResponse': The assistant's response. ### JSON File Structure Function metadata format follows the OpenAI standard. Each function file should be a JSON file with the following structure: The 'functionMetaData' object describes the function. The 'samplePromptResponsePairs' array contains sample prompts and responses for the function. ### Testing JSON Structure A script named 'URL' can be used to validate the structure of a function JSON file. It checks for the presence and correct types of all necessary keys in the JSON structure. 
To use the script, call it from the command line with the name of the function file as an argument: ## Repo Structure (for prompt dataset generation) - 'functions/': This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses. - 'generate_dataset.py': This Python script generates the base training and testing dataset CSV files. The first example in each function json file is used in the validation dataset and the rest are used for the train dataset. - 'URL': This adds in truthfulqa questions and answers after system prompts with functions. - 'text_responses.py': adds in prompts to accustomise the model to the presence of function descriptions at the start of prompt sequences. There are also, some equivalent files for generating a test dataset - to be used for manual evaluation: - 'test_functions/': contains functions for manual evaluation, different to the training and test set of functions. - create_test_datasets.py - which runs URL and test_text_responses.py - URL which creates data rows to test function calling without and without required arguments provided, as well as one chain function calling test (e.g. where one function must be called before the other). - test_text_responses.py generates data rows to test out simple prompts (e.g. Greetings!), short non-sensical prompts (e.g. "shop"), and also a standard question (What planets are in our solar system?).
[ "# Trelis Function Calling Dataset - VERSION 3 - SAMPLE\n\n> This is a SAMPLE of the v3 dataset available for purchase here. \n\nFeatures:\n- Allows models to be fine-tuned for function-calling.\n- The dataset is human generated and does not make use of Llama 2 or OpenAI!\n- The dataset includes 66 training rows, 19 validation rows and 5 test rows (for manual evaluation).\n- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat\n\nAlternatively, you can find pre-trained function calling models on Trelis Mart", "## Updates since v2\n- Cross-compatible function format: The format now matches OpenAI's function format, making it easy to migrate from using OpenAI APIs to any models fine-tuned with this dataset.\n- Chain function calling: Ability (particularly with larger models) to first make a call to one function in order to get data for a second function call.\n- Supported by inferencing scripts, read more below.\n\n--Change-log--\n\n04Dec2023 - Official release of function_calling_v3\n\n02Dec2023 - Pre-release of function_calling_v3", "## Inference Scripts\nOut-of-the-box inference scripts are available for purchase:\n- Purchase only the function calling inference scripts, HERE\n- Purchase as part of the full ADVANCED-inference repo, HERE.", "## Fine-Tuning Notes and Scripts\n\nThe objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends strongly on how the attention mask and loss mask are set. For further details see the Youtube Video Here.\n\nThe fine-tuning script is available for purchase alone here, or is included in the ADVANCED-fine-tuning repository available for purchase on URL.", "### QLoRa Training Notebook for Llama 2 (FREE)\n- Access a basic Google Colab script for fine-tuning here.", "## Licensing\nThe Function Calling Extended dataset is suitable for commercial use.\n\nFurther terms:\n- Licenses are not transferable to other users/entities.\n- The dataset may not be re-published in it's current or derivative form.\n- The dataset may be used to train and fine-tune commercial language models.", "### Attribution of data sources\n\nThis project includes data from the TruthfulQA dataset, which is available at: URL The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.", "## Prompt Format (example below is for openchat)", "## Sample Prompt and Response:", "## CSV File Structure\n\nThe generated CSV file has the following columns:\n\n- 'functionList': Descriptions of two functions (the current function and a randomly selected other function).\n- 'userPrompt': The user's prompt.\n- 'assistantResponse': The assistant's response.", "### JSON File Structure\n\nFunction metadata format follows the OpenAI standard.\n\nEach function file should be a JSON file with the following structure:\n\n\n\nThe 'functionMetaData' object describes the function. The 'samplePromptResponsePairs' array contains sample prompts and responses for the function.", "### Testing JSON Structure\n\nA script named 'URL' can be used to validate the structure of a function JSON file. 
It checks for the presence and correct types of all necessary keys in the JSON structure.\n\nTo use the script, call it from the command line with the name of the function file as an argument:", "## Repo Structure (for prompt dataset generation)\n\n- 'functions/': This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.\n- 'generate_dataset.py': This Python script generates the base training and testing dataset CSV files. The first example in each function json file is used in the validation dataset and the rest are used for the train dataset.\n- 'URL': This adds in truthfulqa questions and answers after system prompts with functions.\n- 'text_responses.py': adds in prompts to accustomise the model to the presence of function descriptions at the start of prompt sequences.\n\nThere are also, some equivalent files for generating a test dataset - to be used for manual evaluation:\n- 'test_functions/': contains functions for manual evaluation, different to the training and test set of functions.\n- create_test_datasets.py - which runs URL and test_text_responses.py\n- URL which creates data rows to test function calling without and without required arguments provided, as well as one chain function calling test (e.g. where one function must be called before the other).\n- test_text_responses.py generates data rows to test out simple prompts (e.g. Greetings!), short non-sensical prompts (e.g. \"shop\"), and also a standard question (What planets are in our solar system?)." ]
[ "TAGS\n#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-n<1K #language-English #function call #function calling #function-calling #region-us \n", "# Trelis Function Calling Dataset - VERSION 3 - SAMPLE\n\n> This is a SAMPLE of the v3 dataset available for purchase here. \n\nFeatures:\n- Allows models to be fine-tuned for function-calling.\n- The dataset is human generated and does not make use of Llama 2 or OpenAI!\n- The dataset includes 66 training rows, 19 validation rows and 5 test rows (for manual evaluation).\n- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat\n\nAlternatively, you can find pre-trained function calling models on Trelis Mart", "## Updates since v2\n- Cross-compatible function format: The format now matches OpenAI's function format, making it easy to migrate from using OpenAI APIs to any models fine-tuned with this dataset.\n- Chain function calling: Ability (particularly with larger models) to first make a call to one function in order to get data for a second function call.\n- Supported by inferencing scripts, read more below.\n\n--Change-log--\n\n04Dec2023 - Official release of function_calling_v3\n\n02Dec2023 - Pre-release of function_calling_v3", "## Inference Scripts\nOut-of-the-box inference scripts are available for purchase:\n- Purchase only the function calling inference scripts, HERE\n- Purchase as part of the full ADVANCED-inference repo, HERE.", "## Fine-Tuning Notes and Scripts\n\nThe objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends strongly on how the attention mask and loss mask are set. For further details see the Youtube Video Here.\n\nThe fine-tuning script is available for purchase alone here, or is included in the ADVANCED-fine-tuning repository available for purchase on URL.", "### QLoRa Training Notebook for Llama 2 (FREE)\n- Access a basic Google Colab script for fine-tuning here.", "## Licensing\nThe Function Calling Extended dataset is suitable for commercial use.\n\nFurther terms:\n- Licenses are not transferable to other users/entities.\n- The dataset may not be re-published in it's current or derivative form.\n- The dataset may be used to train and fine-tune commercial language models.", "### Attribution of data sources\n\nThis project includes data from the TruthfulQA dataset, which is available at: URL The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.", "## Prompt Format (example below is for openchat)", "## Sample Prompt and Response:", "## CSV File Structure\n\nThe generated CSV file has the following columns:\n\n- 'functionList': Descriptions of two functions (the current function and a randomly selected other function).\n- 'userPrompt': The user's prompt.\n- 'assistantResponse': The assistant's response.", "### JSON File Structure\n\nFunction metadata format follows the OpenAI standard.\n\nEach function file should be a JSON file with the following structure:\n\n\n\nThe 'functionMetaData' object describes the function. The 'samplePromptResponsePairs' array contains sample prompts and responses for the function.", "### Testing JSON Structure\n\nA script named 'URL' can be used to validate the structure of a function JSON file. 
It checks for the presence and correct types of all necessary keys in the JSON structure.\n\nTo use the script, call it from the command line with the name of the function file as an argument:", "## Repo Structure (for prompt dataset generation)\n\n- 'functions/': This directory contains function files, each of which is a JSON file with a specific structure that describes a function and its sample prompts and responses.\n- 'generate_dataset.py': This Python script generates the base training and testing dataset CSV files. The first example in each function json file is used in the validation dataset and the rest are used for the train dataset.\n- 'URL': This adds in truthfulqa questions and answers after system prompts with functions.\n- 'text_responses.py': adds in prompts to accustomise the model to the presence of function descriptions at the start of prompt sequences.\n\nThere are also, some equivalent files for generating a test dataset - to be used for manual evaluation:\n- 'test_functions/': contains functions for manual evaluation, different to the training and test set of functions.\n- create_test_datasets.py - which runs URL and test_text_responses.py\n- URL which creates data rows to test function calling without and without required arguments provided, as well as one chain function calling test (e.g. where one function must be called before the other).\n- test_text_responses.py generates data rows to test out simple prompts (e.g. Greetings!), short non-sensical prompts (e.g. \"shop\"), and also a standard question (What planets are in our solar system?)." ]
[ 64, 168, 138, 57, 100, 30, 76, 60, 15, 9, 72, 74, 73, 348 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-conversational #task_categories-text-generation #size_categories-n<1K #language-English #function call #function calling #function-calling #region-us \n# Trelis Function Calling Dataset - VERSION 3 - SAMPLE\n\n> This is a SAMPLE of the v3 dataset available for purchase here. \n\nFeatures:\n- Allows models to be fine-tuned for function-calling.\n- The dataset is human generated and does not make use of Llama 2 or OpenAI!\n- The dataset includes 66 training rows, 19 validation rows and 5 test rows (for manual evaluation).\n- Based on eight functions: search_bing, search_arxiv, save_chat, read_json_file, list_files, get_current_weather, delete_file, clear_chat\n\nAlternatively, you can find pre-trained function calling models on Trelis Mart## Updates since v2\n- Cross-compatible function format: The format now matches OpenAI's function format, making it easy to migrate from using OpenAI APIs to any models fine-tuned with this dataset.\n- Chain function calling: Ability (particularly with larger models) to first make a call to one function in order to get data for a second function call.\n- Supported by inferencing scripts, read more below.\n\n--Change-log--\n\n04Dec2023 - Official release of function_calling_v3\n\n02Dec2023 - Pre-release of function_calling_v3## Inference Scripts\nOut-of-the-box inference scripts are available for purchase:\n- Purchase only the function calling inference scripts, HERE\n- Purchase as part of the full ADVANCED-inference repo, HERE.", "passage: ## Fine-Tuning Notes and Scripts\n\nThe objective of function calling is for the model to return a structured json object *and nothing else*. The performance of fine-tuning depends strongly on how the attention mask and loss mask are set. For further details see the Youtube Video Here.\n\nThe fine-tuning script is available for purchase alone here, or is included in the ADVANCED-fine-tuning repository available for purchase on URL.### QLoRa Training Notebook for Llama 2 (FREE)\n- Access a basic Google Colab script for fine-tuning here.## Licensing\nThe Function Calling Extended dataset is suitable for commercial use.\n\nFurther terms:\n- Licenses are not transferable to other users/entities.\n- The dataset may not be re-published in it's current or derivative form.\n- The dataset may be used to train and fine-tune commercial language models.### Attribution of data sources\n\nThis project includes data from the TruthfulQA dataset, which is available at: URL The truthful_qa dataset is licensed under the Apache License 2.0, Copyright (C) 2023, Stephanie Lin, Jacob Hilton, and Owain Evans.## Prompt Format (example below is for openchat)## Sample Prompt and Response:## CSV File Structure\n\nThe generated CSV file has the following columns:\n\n- 'functionList': Descriptions of two functions (the current function and a randomly selected other function).\n- 'userPrompt': The user's prompt.\n- 'assistantResponse': The assistant's response.### JSON File Structure\n\nFunction metadata format follows the OpenAI standard.\n\nEach function file should be a JSON file with the following structure:\n\n\n\nThe 'functionMetaData' object describes the function. The 'samplePromptResponsePairs' array contains sample prompts and responses for the function.### Testing JSON Structure\n\nA script named 'URL' can be used to validate the structure of a function JSON file. 
It checks for the presence and correct types of all necessary keys in the JSON structure.\n\nTo use the script, call it from the command line with the name of the function file as an argument:" ]
41d2c098f216a057e17818f9ab048c6b7b4865be
## Prepared dataset from roneneldan/TinyStoriesV2-GPT4

## Data Preparation pipeline

- Download TinyStoriesV2-GPT4-train.txt from https://huggingface.co/datasets/roneneldan/TinyStories/blob/main/TinyStoriesV2-GPT4-train.txt

```python
from tqdm import tqdm
from datasets import Dataset

raw = open('TinyStoriesV2-GPT4-train.txt').readlines()

stories = []
chunk = []  # lines of the story currently being accumulated
for x in tqdm(raw, total=len(raw)):
    if x == '\n':
        continue
    if x.startswith('<|endoftext|>'):
        # End of a story: keep the marker, flush the accumulated lines.
        chunk.append(x.strip())
        stories.append(" ".join(chunk))
        chunk = []
        continue
    chunk.append(x.strip())

prep = [{'text': text} for text in stories]
ds = Dataset.from_list(prep)
```

- Repeat for validation split
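Loading the prepared dataset back from the Hub is then a one-liner (split names per the repo config):

```python
from datasets import load_dataset

ds = load_dataset("maveriq/tinystoriesv2_gpt4")  # splits: train, valid
print(ds["train"][0]["text"][:200])
```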
maveriq/tinystoriesv2_gpt4
[ "task_categories:text-generation", "size_categories:1M<n<10M", "language:en", "region:us" ]
2023-12-27T10:34:06+00:00
{"language": ["en"], "size_categories": ["1M<n<10M"], "task_categories": ["text-generation"], "pretty_name": "TinyStoriesV2-GPT4", "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2234135574, "num_examples": 2717699}, {"name": "valid", "num_bytes": 22567397, "num_examples": 27630}], "download_size": 1153194030, "dataset_size": 2256702971}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "valid", "path": "data/valid-*"}]}]}
2023-12-27T10:46:58+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #size_categories-1M<n<10M #language-English #region-us
## Prepared dataset from roneneldan/TinyStoriesV2-GPT4 # Data Preparation pipeline. - Download URL from URL - Repeat for validation split
[ "## Prepared dataset from roneneldan/TinyStoriesV2-GPT4", "# Data Preparation pipeline. \n- Download URL from URL\n\n\n- Repeat for validation split" ]
[ "TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-English #region-us \n", "## Prepared dataset from roneneldan/TinyStoriesV2-GPT4", "# Data Preparation pipeline. \n- Download URL from URL\n\n\n- Repeat for validation split" ]
[ 33, 22, 20 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-1M<n<10M #language-English #region-us \n## Prepared dataset from roneneldan/TinyStoriesV2-GPT4# Data Preparation pipeline. \n- Download URL from URL\n\n\n- Repeat for validation split" ]
ef9d773aba938b72997c455b49b74ca67e16e071
# Web Camera Face Liveness Detection

The dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under **different lighting conditions** and with **different attributes** (*glasses, masks, hats, hoods, wigs, and mustaches for men*).

In the dataset, there are **7 types of videos** filmed on a web camera:

- **Silicone Mask** - demonstration of a silicone mask attack (*silicone*)
- **2D mask with holes for eyes** - demonstration of an attack with a paper/cardboard mask (*mask*)
- **2D mask** - demonstration of an attack with a paper/cardboard silhouette (*outline*)
- **Monitor Replay Attack** - demonstration of an attack from a monitor (*monitor*)
- **A4 Photo Attack** - demonstration of a paper/cardboard A4 photo attack (*print*)
- **A4 Photo with holes for eyes, nose and mouth** - demonstration of a paper/cardboard A4 photo attack with cutouts for the eyes, nose, and mouth (*print_cut*)
- **Real Video** - demonstration of a real person's face (*real*)

![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2F87e1747ec7c320f55d668d18396d9c4f%2FFrame%2062.png?generation=1700568627549534&alt=media)

The dataset supports researchers and developers working on **facial expression recognition, anti-spoofing, face detection, re-identification and face recognition tasks**. The inclusion of various attributes and different lighting conditions aims to enhance the **robustness and effectiveness** of anti-spoofing models in real-world scenarios.

## Full version of the dataset includes 30,000+ videos of people, leave a request on **[TrainingData](https://trainingdata.pro/data-market/web-device-face-liveness-detection?utm_source=huggingface&utm_medium=cpc&utm_campaign=web-camera-face-liveness-detection)** to buy the dataset

### Statistics for the dataset (gender and type of the attack):

![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2F28544e6fe467904bc32107412cf61088%2FFrame%2061.png?generation=1700567253550788&alt=media)

# Get the Dataset

## This is just an example of the data

Leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/web-device-face-liveness-detection?utm_source=huggingface&utm_medium=cpc&utm_campaign=web-camera-face-liveness-detection) to learn about the price and buy the dataset**

# Content

The folder **files** includes:

- **mask** - includes videos of people wearing a 2D mask with holes for eyes,
- **monitor** - includes videos with demonstration of an attack from a monitor,
- **outline** - includes videos of people wearing a 2D mask,
- **print** - includes videos of people with an A4 photo,
- **print_cut** - includes videos of people with an A4 photo with holes for eyes, nose and mouth,
- **real** - includes real videos of people,
- **silicone** - includes videos of people wearing a silicone mask

### File with the extension .csv

- **file**: link to access the file,
- **type**: type of the video (*real, mask, outline, print, print_cut, silicone, monitor*)

A small sketch for reading this CSV follows at the end of this card.

## **[TrainingData](https://trainingdata.pro/data-market/web-device-face-liveness-detection?utm_source=huggingface&utm_medium=cpc&utm_campaign=web-camera-face-liveness-detection)** provides high-quality data annotation tailored to your needs

More datasets in TrainingData's Kaggle account: **<https://www.kaggle.com/trainingdatapro/datasets>**

TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**

*keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset, spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution, silicone masks attacks*
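As referenced in the .csv section above, a small sketch for grouping the sample by attack type; the CSV file name is hypothetical, as the card only documents the `file` and `type` columns:

```python
import pandas as pd

# File name is hypothetical; the card only documents a .csv with
# 'file' (download link) and 'type' (attack type) columns.
df = pd.read_csv("face_liveness_detection.csv")
print(df["type"].value_counts())  # real, mask, outline, print, print_cut, silicone, monitor
```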
TrainingDataPro/web-camera-face-liveness-detection
[ "task_categories:video-classification", "task_categories:image-classification", "task_categories:image-to-image", "language:en", "license:cc-by-nc-nd-4.0", "code", "finance", "biology", "legal", "region:us" ]
2023-12-27T12:30:24+00:00
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["video-classification", "image-classification", "image-to-image"], "tags": ["code", "finance", "biology", "legal"]}
2023-12-27T13:20:50+00:00
[]
[ "en" ]
TAGS #task_categories-video-classification #task_categories-image-classification #task_categories-image-to-image #language-English #license-cc-by-nc-nd-4.0 #code #finance #biology #legal #region-us
# Web Camera Face Liveness Detection The dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*). In the dataset, there are 7 types of videos filmed on a web camera: - Silicone Mask - demonstration of a silicone mask attack (*silicone*) - 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*) - 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*) - Monitor Replay Attack - demonstration of an attack from a monitor (*monitor *) - A4 Photo Attack - demonstration of a paper/cardboard A4 photo attack (*print*) - A4 Photo with holes for eyes, nose and mouth - demonstration of a paper/cardboard A4 photo attack with cutouts for the eyes, nose, and mouth (*print_cut*) - Real Video - demonstration of a real person's face (*real*) ![](URL The dataset allows researchers and developers in recognizing and analyzing facial expressions, anti-spoofing tasks, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios. ## Full version of the dataset includes 30,000+ videos of people, leave a request on TrainingData to buy the dataset ### Statistics for the dataset (gender and type of the attack): ![](URL # Get the Dataset ## This is just an example of the data Leave a request on URL to learn about the price and buy the dataset # Content The folder files includes: - mask - includes videos of people wearing a 2D mask with holes for eyes, - monitor - includes videos with demonstration of an attack from a monitor, - outline - includes videos of people wearing a 2D mask, - print - includes videos of people with an A4 photo, - print_cut - includes videos of people with an A4 photo with holes for eyes, nose and mouth, - real - includes real videos of people, - silicone - includes videos of people wearing a silicone mask ### File with the extension .csv - file: link to access the file, - type: type of the video (*real, mask, outline, print, print_cut, silicone, monitor*) ## TrainingData provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: <URL TrainingData's GitHub: URL *keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset, spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution, silicone masks attacks*
[ "# Web Camera Face Liveness Detection\n\nThe dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*).\n\nIn the dataset, there are 7 types of videos filmed on a web camera:\n\n- Silicone Mask - demonstration of a silicone mask attack (*silicone*)\n- 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*)\n- 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*)\n- Monitor Replay Attack - demonstration of an attack from a monitor (*monitor *)\n- A4 Photo Attack - demonstration of a paper/cardboard A4 photo attack (*print*)\n- A4 Photo with holes for eyes, nose and mouth - demonstration of a paper/cardboard A4 photo attack with cutouts for the eyes, nose, and mouth (*print_cut*)\n- Real Video - demonstration of a real person's face (*real*)\n\n![](URL\n\nThe dataset allows researchers and developers in recognizing and analyzing facial expressions, anti-spoofing tasks, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios.", "## Full version of the dataset includes 30,000+ videos of people, leave a request on TrainingData to buy the dataset", "### Statistics for the dataset (gender and type of the attack):\n\n![](URL", "# Get the Dataset", "## This is just an example of the data \nLeave a request on URL to learn about the price and buy the dataset", "# Content\nThe folder files includes:\n- mask - includes videos of people wearing a 2D mask with holes for eyes,\n- monitor - includes videos with demonstration of an attack from a monitor,\n- outline - includes videos of people wearing a 2D mask,\n- print - includes videos of people with an A4 photo,\n- print_cut - includes videos of people with an A4 photo with holes for eyes, nose and mouth,\n- real - includes real videos of people,\n- silicone - includes videos of people wearing a silicone mask", "### File with the extension .csv\n- file: link to access the file,\n- type: type of the video (*real, mask, outline, print, print_cut, silicone, monitor*)", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: <URL\n\nTrainingData's GitHub: URL\n\n*keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset, spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution, silicone masks attacks*" ]
[ "TAGS\n#task_categories-video-classification #task_categories-image-classification #task_categories-image-to-image #language-English #license-cc-by-nc-nd-4.0 #code #finance #biology #legal #region-us \n", "# Web Camera Face Liveness Detection\n\nThe dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*).\n\nIn the dataset, there are 7 types of videos filmed on a web camera:\n\n- Silicone Mask - demonstration of a silicone mask attack (*silicone*)\n- 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*)\n- 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*)\n- Monitor Replay Attack - demonstration of an attack from a monitor (*monitor *)\n- A4 Photo Attack - demonstration of a paper/cardboard A4 photo attack (*print*)\n- A4 Photo with holes for eyes, nose and mouth - demonstration of a paper/cardboard A4 photo attack with cutouts for the eyes, nose, and mouth (*print_cut*)\n- Real Video - demonstration of a real person's face (*real*)\n\n![](URL\n\nThe dataset allows researchers and developers in recognizing and analyzing facial expressions, anti-spoofing tasks, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios.", "## Full version of the dataset includes 30,000+ videos of people, leave a request on TrainingData to buy the dataset", "### Statistics for the dataset (gender and type of the attack):\n\n![](URL", "# Get the Dataset", "## This is just an example of the data \nLeave a request on URL to learn about the price and buy the dataset", "# Content\nThe folder files includes:\n- mask - includes videos of people wearing a 2D mask with holes for eyes,\n- monitor - includes videos with demonstration of an attack from a monitor,\n- outline - includes videos of people wearing a 2D mask,\n- print - includes videos of people with an A4 photo,\n- print_cut - includes videos of people with an A4 photo with holes for eyes, nose and mouth,\n- real - includes real videos of people,\n- silicone - includes videos of people wearing a silicone mask", "### File with the extension .csv\n- file: link to access the file,\n- type: type of the video (*real, mask, outline, print, print_cut, silicone, monitor*)", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: <URL\n\nTrainingData's GitHub: URL\n\n*keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset, spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution, silicone masks attacks*" ]
[ 67, 328, 26, 22, 5, 24, 112, 44, 232 ]
[ "passage: TAGS\n#task_categories-video-classification #task_categories-image-classification #task_categories-image-to-image #language-English #license-cc-by-nc-nd-4.0 #code #finance #biology #legal #region-us \n# Web Camera Face Liveness Detection\n\nThe dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*).\n\nIn the dataset, there are 7 types of videos filmed on a web camera:\n\n- Silicone Mask - demonstration of a silicone mask attack (*silicone*)\n- 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*)\n- 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*)\n- Monitor Replay Attack - demonstration of an attack from a monitor (*monitor *)\n- A4 Photo Attack - demonstration of a paper/cardboard A4 photo attack (*print*)\n- A4 Photo with holes for eyes, nose and mouth - demonstration of a paper/cardboard A4 photo attack with cutouts for the eyes, nose, and mouth (*print_cut*)\n- Real Video - demonstration of a real person's face (*real*)\n\n![](URL\n\nThe dataset allows researchers and developers in recognizing and analyzing facial expressions, anti-spoofing tasks, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios.## Full version of the dataset includes 30,000+ videos of people, leave a request on TrainingData to buy the dataset### Statistics for the dataset (gender and type of the attack):\n\n![](URL# Get the Dataset## This is just an example of the data \nLeave a request on URL to learn about the price and buy the dataset" ]
cc6e2a1cee8eb1019b6b4a5bbfd3d55e34d67011
# Dataset Card for Evaluation run of bit-dny/MindLLM <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [bit-dny/MindLLM](https://huggingface.co/bit-dny/MindLLM) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_bit-dny__MindLLM", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T12:33:52.223530](https://huggingface.co/datasets/open-llm-leaderboard/details_bit-dny__MindLLM/blob/main/results_2023-12-27T12-33-52.223530.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.2547315459012399, "acc_stderr": 0.030757121924893716, "acc_norm": 0.2559855532831359, "acc_norm_stderr": 0.03153175700940631, "mc1": 0.26193390452876375, "mc1_stderr": 0.015392118805015025, "mc2": 0.43479871223663846, "mc2_stderr": 0.015180815930542027 }, "harness|arc:challenge|25": { "acc": 0.19539249146757678, "acc_stderr": 0.011586907189952911, "acc_norm": 0.22440273037542663, "acc_norm_stderr": 0.012191404938603838 }, "harness|hellaswag|10": { "acc": 0.30392352121091415, "acc_stderr": 0.004590100050198833, "acc_norm": 0.34106751643098987, "acc_norm_stderr": 0.004730991357194287 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.23, "acc_stderr": 0.042295258468165044, "acc_norm": 0.23, "acc_norm_stderr": 0.042295258468165044 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.24444444444444444, "acc_stderr": 0.03712537833614866, "acc_norm": 0.24444444444444444, "acc_norm_stderr": 0.03712537833614866 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.20394736842105263, "acc_stderr": 0.0327900040631005, "acc_norm": 0.20394736842105263, "acc_norm_stderr": 0.0327900040631005 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.26, "acc_stderr": 0.044084400227680794, "acc_norm": 0.26, "acc_norm_stderr": 0.044084400227680794 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.27169811320754716, "acc_stderr": 0.027377706624670713, "acc_norm": 0.27169811320754716, "acc_norm_stderr": 0.027377706624670713 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2916666666666667, "acc_stderr": 0.03800968060554858, "acc_norm": 0.2916666666666667, "acc_norm_stderr": 0.03800968060554858 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.2, "acc_stderr": 0.040201512610368445, "acc_norm": 0.2, "acc_norm_stderr": 0.040201512610368445 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.35, "acc_stderr":
0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.23121387283236994, "acc_stderr": 0.03214737302029471, "acc_norm": 0.23121387283236994, "acc_norm_stderr": 0.03214737302029471 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.17647058823529413, "acc_stderr": 0.0379328118530781, "acc_norm": 0.17647058823529413, "acc_norm_stderr": 0.0379328118530781 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.18, "acc_stderr": 0.038612291966536955, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536955 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.2851063829787234, "acc_stderr": 0.02951319662553935, "acc_norm": 0.2851063829787234, "acc_norm_stderr": 0.02951319662553935 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2631578947368421, "acc_stderr": 0.041424397194893624, "acc_norm": 0.2631578947368421, "acc_norm_stderr": 0.041424397194893624 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.27586206896551724, "acc_stderr": 0.037245636197746325, "acc_norm": 0.27586206896551724, "acc_norm_stderr": 0.037245636197746325 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.26455026455026454, "acc_stderr": 0.022717467897708617, "acc_norm": 0.26455026455026454, "acc_norm_stderr": 0.022717467897708617 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.15079365079365079, "acc_stderr": 0.03200686497287394, "acc_norm": 0.15079365079365079, "acc_norm_stderr": 0.03200686497287394 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.23870967741935484, "acc_stderr": 0.024251071262208837, "acc_norm": 0.23870967741935484, "acc_norm_stderr": 0.024251071262208837 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.2955665024630542, "acc_stderr": 0.032104944337514575, "acc_norm": 0.2955665024630542, "acc_norm_stderr": 0.032104944337514575 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.21818181818181817, "acc_stderr": 0.03225078108306289, "acc_norm": 0.21818181818181817, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.2828282828282828, "acc_stderr": 0.03208779558786752, "acc_norm": 0.2828282828282828, "acc_norm_stderr": 0.03208779558786752 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.35233160621761656, "acc_stderr": 0.034474782864143586, "acc_norm": 0.35233160621761656, "acc_norm_stderr": 0.034474782864143586 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2846153846153846, "acc_stderr": 0.0228783227997063, "acc_norm": 0.2846153846153846, "acc_norm_stderr": 0.0228783227997063 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.27037037037037037, "acc_stderr": 0.027080372815145668, "acc_norm": 0.27037037037037037, "acc_norm_stderr": 0.027080372815145668 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.24369747899159663, "acc_stderr": 0.02788682807838057, "acc_norm": 0.24369747899159663, "acc_norm_stderr": 0.02788682807838057 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.271523178807947, "acc_stderr": 0.03631329803969653, "acc_norm": 0.271523178807947, "acc_norm_stderr": 
0.03631329803969653 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3100917431192661, "acc_stderr": 0.019830849684439742, "acc_norm": 0.3100917431192661, "acc_norm_stderr": 0.019830849684439742 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4351851851851852, "acc_stderr": 0.033812000056435254, "acc_norm": 0.4351851851851852, "acc_norm_stderr": 0.033812000056435254 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.25, "acc_stderr": 0.03039153369274154, "acc_norm": 0.25, "acc_norm_stderr": 0.03039153369274154 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.270042194092827, "acc_stderr": 0.028900721906293426, "acc_norm": 0.270042194092827, "acc_norm_stderr": 0.028900721906293426 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.17040358744394618, "acc_stderr": 0.025234593447136165, "acc_norm": 0.17040358744394618, "acc_norm_stderr": 0.025234593447136165 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.22137404580152673, "acc_stderr": 0.036412970813137276, "acc_norm": 0.22137404580152673, "acc_norm_stderr": 0.036412970813137276 }, "harness|hendrycksTest-international_law|5": { "acc": 0.19834710743801653, "acc_stderr": 0.03640118271990945, "acc_norm": 0.19834710743801653, "acc_norm_stderr": 0.03640118271990945 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.2777777777777778, "acc_stderr": 0.043300437496507416, "acc_norm": 0.2777777777777778, "acc_norm_stderr": 0.043300437496507416 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.3006134969325153, "acc_stderr": 0.03602511318806771, "acc_norm": 0.3006134969325153, "acc_norm_stderr": 0.03602511318806771 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.2767857142857143, "acc_stderr": 0.042466243366976256, "acc_norm": 0.2767857142857143, "acc_norm_stderr": 0.042466243366976256 }, "harness|hendrycksTest-management|5": { "acc": 0.23300970873786409, "acc_stderr": 0.04185832598928315, "acc_norm": 0.23300970873786409, "acc_norm_stderr": 0.04185832598928315 }, "harness|hendrycksTest-marketing|5": { "acc": 0.24786324786324787, "acc_stderr": 0.028286324075564407, "acc_norm": 0.24786324786324787, "acc_norm_stderr": 0.028286324075564407 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.22, "acc_stderr": 0.041633319989322695, "acc_norm": 0.22, "acc_norm_stderr": 0.041633319989322695 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.26309067688378035, "acc_stderr": 0.015745497169049053, "acc_norm": 0.26309067688378035, "acc_norm_stderr": 0.015745497169049053 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.20809248554913296, "acc_stderr": 0.0218552552634218, "acc_norm": 0.20809248554913296, "acc_norm_stderr": 0.0218552552634218 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24692737430167597, "acc_stderr": 0.014422292204808835, "acc_norm": 0.24692737430167597, "acc_norm_stderr": 0.014422292204808835 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.21568627450980393, "acc_stderr": 0.02355083135199509, "acc_norm": 0.21568627450980393, "acc_norm_stderr": 0.02355083135199509 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.2733118971061093, "acc_stderr": 0.02531176597542611, "acc_norm": 0.2733118971061093, "acc_norm_stderr": 0.02531176597542611 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.2222222222222222, "acc_stderr": 0.02313237623454334, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.02313237623454334 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2730496453900709, "acc_stderr": 
0.026577860943307857, "acc_norm": 0.2730496453900709, "acc_norm_stderr": 0.026577860943307857 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.2457627118644068, "acc_stderr": 0.010996156635142692, "acc_norm": 0.2457627118644068, "acc_norm_stderr": 0.010996156635142692 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.22794117647058823, "acc_stderr": 0.025483081468029804, "acc_norm": 0.22794117647058823, "acc_norm_stderr": 0.025483081468029804 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.2777777777777778, "acc_stderr": 0.01812022425148458, "acc_norm": 0.2777777777777778, "acc_norm_stderr": 0.01812022425148458 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.20909090909090908, "acc_stderr": 0.038950910157241364, "acc_norm": 0.20909090909090908, "acc_norm_stderr": 0.038950910157241364 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.18775510204081633, "acc_stderr": 0.02500025603954621, "acc_norm": 0.18775510204081633, "acc_norm_stderr": 0.02500025603954621 }, "harness|hendrycksTest-sociology|5": { "acc": 0.263681592039801, "acc_stderr": 0.031157150869355558, "acc_norm": 0.263681592039801, "acc_norm_stderr": 0.031157150869355558 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.2, "acc_stderr": 0.040201512610368466, "acc_norm": 0.2, "acc_norm_stderr": 0.040201512610368466 }, "harness|hendrycksTest-virology|5": { "acc": 0.2469879518072289, "acc_stderr": 0.03357351982064536, "acc_norm": 0.2469879518072289, "acc_norm_stderr": 0.03357351982064536 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.22807017543859648, "acc_stderr": 0.03218093795602357, "acc_norm": 0.22807017543859648, "acc_norm_stderr": 0.03218093795602357 }, "harness|truthfulqa:mc|0": { "mc1": 0.26193390452876375, "mc1_stderr": 0.015392118805015025, "mc2": 0.43479871223663846, "mc2_stderr": 0.015180815930542027 }, "harness|winogrande|5": { "acc": 0.49329123914759276, "acc_stderr": 0.014051220692330346 }, "harness|gsm8k|5": { "acc": 0.008339651250947688, "acc_stderr": 0.002504942226860537 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
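Beyond loading the per-sample details with `load_dataset` as shown above, the aggregated metrics quoted in "Latest results" can be read straight from the run's results JSON. A minimal sketch, assuming the `huggingface_hub` client and the results filename referenced in this card; the top-level layout of the JSON (whether the task mapping sits under a "results" key or at the root) is an assumption, so the code handles both:

```python
import json

from huggingface_hub import hf_hub_download

# Download the aggregated results file for this run
# (filename taken from the "Latest results" link above).
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_bit-dny__MindLLM",
    filename="results_2023-12-27T12-33-52.223530.json",
    repo_type="dataset",
)

with open(path) as f:
    payload = json.load(f)

# The card quotes the bare task mapping; real files may nest it under "results".
per_task = payload.get("results", payload)
for task, metrics in sorted(per_task.items()):
    if isinstance(metrics, dict) and "acc_norm" in metrics:
        print(f"{task}: acc_norm = {metrics['acc_norm']:.4f}")
```

The same numbers are also reachable through the "latest" split of each named configuration (e.g. `load_dataset("open-llm-leaderboard/details_bit-dny__MindLLM", "harness_gsm8k_5", split="latest")`), per the configs listed in this card's metadata.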
open-llm-leaderboard/details_bit-dny__MindLLM
[ "region:us" ]
2023-12-27T12:35:31+00:00
{"pretty_name": "Evaluation run of bit-dny/MindLLM", "dataset_summary": "Dataset automatically created during the evaluation run of model [bit-dny/MindLLM](https://huggingface.co/bit-dny/MindLLM) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bit-dny__MindLLM\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T12:33:52.223530](https://huggingface.co/datasets/open-llm-leaderboard/details_bit-dny__MindLLM/blob/main/results_2023-12-27T12-33-52.223530.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2547315459012399,\n \"acc_stderr\": 0.030757121924893716,\n \"acc_norm\": 0.2559855532831359,\n \"acc_norm_stderr\": 0.03153175700940631,\n \"mc1\": 0.26193390452876375,\n \"mc1_stderr\": 0.015392118805015025,\n \"mc2\": 0.43479871223663846,\n \"mc2_stderr\": 0.015180815930542027\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.19539249146757678,\n \"acc_stderr\": 0.011586907189952911,\n \"acc_norm\": 0.22440273037542663,\n \"acc_norm_stderr\": 0.012191404938603838\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.30392352121091415,\n \"acc_stderr\": 0.004590100050198833,\n \"acc_norm\": 0.34106751643098987,\n \"acc_norm_stderr\": 0.004730991357194287\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165044,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.042295258468165044\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.24444444444444444,\n \"acc_stderr\": 0.03712537833614866,\n \"acc_norm\": 0.24444444444444444,\n \"acc_norm_stderr\": 0.03712537833614866\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.20394736842105263,\n \"acc_stderr\": 0.0327900040631005,\n \"acc_norm\": 0.20394736842105263,\n \"acc_norm_stderr\": 0.0327900040631005\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.27169811320754716,\n \"acc_stderr\": 0.027377706624670713,\n \"acc_norm\": 0.27169811320754716,\n \"acc_norm_stderr\": 0.027377706624670713\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2916666666666667,\n \"acc_stderr\": 0.03800968060554858,\n \"acc_norm\": 0.2916666666666667,\n \"acc_norm_stderr\": 0.03800968060554858\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.040201512610368445,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.040201512610368445\n 
},\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.23121387283236994,\n \"acc_stderr\": 0.03214737302029471,\n \"acc_norm\": 0.23121387283236994,\n \"acc_norm_stderr\": 0.03214737302029471\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.0379328118530781,\n \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.0379328118530781\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2851063829787234,\n \"acc_stderr\": 0.02951319662553935,\n \"acc_norm\": 0.2851063829787234,\n \"acc_norm_stderr\": 0.02951319662553935\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.041424397194893624,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.041424397194893624\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.27586206896551724,\n \"acc_stderr\": 0.037245636197746325,\n \"acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.037245636197746325\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.26455026455026454,\n \"acc_stderr\": 0.022717467897708617,\n \"acc_norm\": 0.26455026455026454,\n \"acc_norm_stderr\": 0.022717467897708617\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15079365079365079,\n \"acc_stderr\": 0.03200686497287394,\n \"acc_norm\": 0.15079365079365079,\n \"acc_norm_stderr\": 0.03200686497287394\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.23870967741935484,\n \"acc_stderr\": 0.024251071262208837,\n \"acc_norm\": 0.23870967741935484,\n \"acc_norm_stderr\": 0.024251071262208837\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2955665024630542,\n \"acc_stderr\": 0.032104944337514575,\n \"acc_norm\": 0.2955665024630542,\n \"acc_norm_stderr\": 0.032104944337514575\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.2828282828282828,\n \"acc_stderr\": 0.03208779558786752,\n \"acc_norm\": 0.2828282828282828,\n \"acc_norm_stderr\": 0.03208779558786752\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.35233160621761656,\n \"acc_stderr\": 0.034474782864143586,\n \"acc_norm\": 0.35233160621761656,\n \"acc_norm_stderr\": 0.034474782864143586\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2846153846153846,\n \"acc_stderr\": 
0.0228783227997063,\n \"acc_norm\": 0.2846153846153846,\n \"acc_norm_stderr\": 0.0228783227997063\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.27037037037037037,\n \"acc_stderr\": 0.027080372815145668,\n \"acc_norm\": 0.27037037037037037,\n \"acc_norm_stderr\": 0.027080372815145668\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.24369747899159663,\n \"acc_stderr\": 0.02788682807838057,\n \"acc_norm\": 0.24369747899159663,\n \"acc_norm_stderr\": 0.02788682807838057\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.271523178807947,\n \"acc_stderr\": 0.03631329803969653,\n \"acc_norm\": 0.271523178807947,\n \"acc_norm_stderr\": 0.03631329803969653\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3100917431192661,\n \"acc_stderr\": 0.019830849684439742,\n \"acc_norm\": 0.3100917431192661,\n \"acc_norm_stderr\": 0.019830849684439742\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4351851851851852,\n \"acc_stderr\": 0.033812000056435254,\n \"acc_norm\": 0.4351851851851852,\n \"acc_norm_stderr\": 0.033812000056435254\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03039153369274154,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03039153369274154\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.270042194092827,\n \"acc_stderr\": 0.028900721906293426,\n \"acc_norm\": 0.270042194092827,\n \"acc_norm_stderr\": 0.028900721906293426\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.17040358744394618,\n \"acc_stderr\": 0.025234593447136165,\n \"acc_norm\": 0.17040358744394618,\n \"acc_norm_stderr\": 0.025234593447136165\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.22137404580152673,\n \"acc_stderr\": 0.036412970813137276,\n \"acc_norm\": 0.22137404580152673,\n \"acc_norm_stderr\": 0.036412970813137276\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.19834710743801653,\n \"acc_stderr\": 0.03640118271990945,\n \"acc_norm\": 0.19834710743801653,\n \"acc_norm_stderr\": 0.03640118271990945\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.043300437496507416,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.043300437496507416\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3006134969325153,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.3006134969325153,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.2767857142857143,\n \"acc_stderr\": 0.042466243366976256,\n \"acc_norm\": 0.2767857142857143,\n \"acc_norm_stderr\": 0.042466243366976256\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.23300970873786409,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.23300970873786409,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.24786324786324787,\n \"acc_stderr\": 0.028286324075564407,\n \"acc_norm\": 0.24786324786324787,\n \"acc_norm_stderr\": 0.028286324075564407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26309067688378035,\n \"acc_stderr\": 0.015745497169049053,\n \"acc_norm\": 0.26309067688378035,\n \"acc_norm_stderr\": 
0.015745497169049053\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.20809248554913296,\n \"acc_stderr\": 0.0218552552634218,\n \"acc_norm\": 0.20809248554913296,\n \"acc_norm_stderr\": 0.0218552552634218\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.02355083135199509,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.02355083135199509\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2733118971061093,\n \"acc_stderr\": 0.02531176597542611,\n \"acc_norm\": 0.2733118971061093,\n \"acc_norm_stderr\": 0.02531176597542611\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.02313237623454334,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.02313237623454334\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2730496453900709,\n \"acc_stderr\": 0.026577860943307857,\n \"acc_norm\": 0.2730496453900709,\n \"acc_norm_stderr\": 0.026577860943307857\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n \"acc_stderr\": 0.010996156635142692,\n \"acc_norm\": 0.2457627118644068,\n \"acc_norm_stderr\": 0.010996156635142692\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.22794117647058823,\n \"acc_stderr\": 0.025483081468029804,\n \"acc_norm\": 0.22794117647058823,\n \"acc_norm_stderr\": 0.025483081468029804\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2777777777777778,\n \"acc_stderr\": 0.01812022425148458,\n \"acc_norm\": 0.2777777777777778,\n \"acc_norm_stderr\": 0.01812022425148458\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.038950910157241364,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.038950910157241364\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.18775510204081633,\n \"acc_stderr\": 0.02500025603954621,\n \"acc_norm\": 0.18775510204081633,\n \"acc_norm_stderr\": 0.02500025603954621\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.263681592039801,\n \"acc_stderr\": 0.031157150869355558,\n \"acc_norm\": 0.263681592039801,\n \"acc_norm_stderr\": 0.031157150869355558\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.040201512610368466,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.040201512610368466\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2469879518072289,\n \"acc_stderr\": 0.03357351982064536,\n \"acc_norm\": 0.2469879518072289,\n \"acc_norm_stderr\": 0.03357351982064536\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.22807017543859648,\n \"acc_stderr\": 0.03218093795602357,\n \"acc_norm\": 0.22807017543859648,\n \"acc_norm_stderr\": 0.03218093795602357\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.26193390452876375,\n \"mc1_stderr\": 0.015392118805015025,\n \"mc2\": 0.43479871223663846,\n \"mc2_stderr\": 0.015180815930542027\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.49329123914759276,\n \"acc_stderr\": 0.014051220692330346\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.008339651250947688,\n \"acc_stderr\": 0.002504942226860537\n }\n}\n```", "repo_url": "https://huggingface.co/bit-dny/MindLLM", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-33-52.223530.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-33-52.223530.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-33-52.223530.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-33-52.223530.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-33-52.223530.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-33-52.223530.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["**/details_harness|winogrande|5_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T12-33-52.223530.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T12_33_52.223530", "path": ["results_2023-12-27T12-33-52.223530.parquet"]}, {"split": "latest", "path": 
["results_2023-12-27T12-33-52.223530.parquet"]}]}]}
2023-12-27T12:35:58+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of bit-dny/MindLLM Dataset automatically created during the evaluation run of model bit-dny/MindLLM on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T12:33:52.223530 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
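The card above promises a loading snippet ("you can for instance do the following:"), but the code block was dropped from this flattened copy. Below is a minimal sketch; the repo id is an assumption, inferred from the leaderboard's usual `details_<org>__<model>` naming rather than quoted from this record, while the `harness_winogrande_5` config and the `"train"` split come from the record's own configs metadata and card text.

```python
from datasets import load_dataset

# Minimal sketch. Assumption: the details repo follows the leaderboard's
# "details_<org>__<model>" naming, giving the repo id below; it is not
# quoted verbatim from this record.
data = load_dataset(
    "open-llm-leaderboard/details_bit-dny__MindLLM",
    "harness_winogrande_5",  # one of the 63 per-task configurations
    split="train",           # "train" always points at the latest results
)
```

Each per-task configuration also exposes a timestamped split (here `2023_12_27T12_33_52.223530`) alongside `"latest"`, as listed in the configs metadata above.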
[ "# Dataset Card for Evaluation run of bit-dny/MindLLM\n\n\n\nDataset automatically created during the evaluation run of model bit-dny/MindLLM on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:33:52.223530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of bit-dny/MindLLM\n\n\n\nDataset automatically created during the evaluation run of model bit-dny/MindLLM on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:33:52.223530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 177, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bit-dny/MindLLM\n\n\n\nDataset automatically created during the evaluation run of model bit-dny/MindLLM on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T12:33:52.223530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
1f20c99d0a44c961e4b7efd48c95d23661bcd483
# Dataset Card for Evaluation run of Walmart-the-bag/Yi-6B-Infinity-Chat <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Walmart-the-bag/Yi-6B-Infinity-Chat](https://huggingface.co/Walmart-the-bag/Yi-6B-Infinity-Chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Walmart-the-bag__Yi-6B-Infinity-Chat", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T12:43:24.987428](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__Yi-6B-Infinity-Chat/blob/main/results_2023-12-27T12-43-24.987428.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6349062046421136, "acc_stderr": 0.03216959197766624, "acc_norm": 0.641573735628457, "acc_norm_stderr": 0.03281243961732792, "mc1": 0.35128518971848227, "mc1_stderr": 0.0167113581635444, "mc2": 0.5074966311746293, "mc2_stderr": 0.01566752376256644 }, "harness|arc:challenge|25": { "acc": 0.5315699658703071, "acc_stderr": 0.014582236460866978, "acc_norm": 0.5656996587030717, "acc_norm_stderr": 0.01448470304885736 }, "harness|hellaswag|10": { "acc": 0.5875323640709023, "acc_stderr": 0.004912723848944794, "acc_norm": 0.7766381198964349, "acc_norm_stderr": 0.0041564771409096125 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6444444444444445, "acc_stderr": 0.04135176749720385, "acc_norm": 0.6444444444444445, "acc_norm_stderr": 0.04135176749720385 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7105263157894737, "acc_stderr": 0.03690677986137282, "acc_norm": 0.7105263157894737, "acc_norm_stderr": 0.03690677986137282 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6943396226415094, "acc_stderr": 0.028353298073322663, "acc_norm": 0.6943396226415094, "acc_norm_stderr": 0.028353298073322663 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6736111111111112, "acc_stderr": 0.03921067198982266, "acc_norm": 0.6736111111111112, "acc_norm_stderr": 0.03921067198982266 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.43, "acc_stderr": 0.049756985195624284, "acc_norm": 0.43, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.55, "acc_stderr": 0.05, "acc_norm": 0.55, "acc_norm_stderr": 0.05 },
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6647398843930635, "acc_stderr": 0.03599586301247077, "acc_norm": 0.6647398843930635, "acc_norm_stderr": 0.03599586301247077 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.04810840148082636, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.04810840148082636 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.77, "acc_stderr": 0.04229525846816507, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816507 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5957446808510638, "acc_stderr": 0.03208115750788684, "acc_norm": 0.5957446808510638, "acc_norm_stderr": 0.03208115750788684 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.3684210526315789, "acc_stderr": 0.04537815354939392, "acc_norm": 0.3684210526315789, "acc_norm_stderr": 0.04537815354939392 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6275862068965518, "acc_stderr": 0.04028731532947558, "acc_norm": 0.6275862068965518, "acc_norm_stderr": 0.04028731532947558 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4365079365079365, "acc_stderr": 0.02554284681740048, "acc_norm": 0.4365079365079365, "acc_norm_stderr": 0.02554284681740048 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.42063492063492064, "acc_stderr": 0.04415438226743744, "acc_norm": 0.42063492063492064, "acc_norm_stderr": 0.04415438226743744 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.39, "acc_stderr": 0.04902071300001974, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7774193548387097, "acc_stderr": 0.02366421667164251, "acc_norm": 0.7774193548387097, "acc_norm_stderr": 0.02366421667164251 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5024630541871922, "acc_stderr": 0.03517945038691063, "acc_norm": 0.5024630541871922, "acc_norm_stderr": 0.03517945038691063 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.67, "acc_stderr": 0.04725815626252607, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252607 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7515151515151515, "acc_stderr": 0.03374402644139403, "acc_norm": 0.7515151515151515, "acc_norm_stderr": 0.03374402644139403 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8282828282828283, "acc_stderr": 0.02686971618742991, "acc_norm": 0.8282828282828283, "acc_norm_stderr": 0.02686971618742991 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.021500249576033456, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.021500249576033456 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6641025641025641, "acc_stderr": 0.023946724741563966, "acc_norm": 0.6641025641025641, "acc_norm_stderr": 0.023946724741563966 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3074074074074074, "acc_stderr": 0.028133252578815635, "acc_norm": 0.3074074074074074, "acc_norm_stderr": 0.028133252578815635 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7773109243697479, "acc_stderr": 0.027025433498882385, "acc_norm": 0.7773109243697479, "acc_norm_stderr": 0.027025433498882385 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.36423841059602646, "acc_stderr": 
0.03929111781242741, "acc_norm": 0.36423841059602646, "acc_norm_stderr": 0.03929111781242741 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8366972477064221, "acc_stderr": 0.01584825580650155, "acc_norm": 0.8366972477064221, "acc_norm_stderr": 0.01584825580650155 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5694444444444444, "acc_stderr": 0.03376922151252335, "acc_norm": 0.5694444444444444, "acc_norm_stderr": 0.03376922151252335 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7745098039215687, "acc_stderr": 0.029331162294251735, "acc_norm": 0.7745098039215687, "acc_norm_stderr": 0.029331162294251735 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8059071729957806, "acc_stderr": 0.025744902532290923, "acc_norm": 0.8059071729957806, "acc_norm_stderr": 0.025744902532290923 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6591928251121076, "acc_stderr": 0.0318114974705536, "acc_norm": 0.6591928251121076, "acc_norm_stderr": 0.0318114974705536 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7175572519083969, "acc_stderr": 0.03948406125768362, "acc_norm": 0.7175572519083969, "acc_norm_stderr": 0.03948406125768362 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7355371900826446, "acc_stderr": 0.040261875275912046, "acc_norm": 0.7355371900826446, "acc_norm_stderr": 0.040261875275912046 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7962962962962963, "acc_stderr": 0.03893542518824847, "acc_norm": 0.7962962962962963, "acc_norm_stderr": 0.03893542518824847 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7423312883435583, "acc_stderr": 0.03436150827846917, "acc_norm": 0.7423312883435583, "acc_norm_stderr": 0.03436150827846917 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.04726835553719099, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.04726835553719099 }, "harness|hendrycksTest-management|5": { "acc": 0.8349514563106796, "acc_stderr": 0.036756688322331886, "acc_norm": 0.8349514563106796, "acc_norm_stderr": 0.036756688322331886 }, "harness|hendrycksTest-marketing|5": { "acc": 0.9017094017094017, "acc_stderr": 0.019503444900757567, "acc_norm": 0.9017094017094017, "acc_norm_stderr": 0.019503444900757567 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8033205619412516, "acc_stderr": 0.014214138556913915, "acc_norm": 0.8033205619412516, "acc_norm_stderr": 0.014214138556913915 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.708092485549133, "acc_stderr": 0.024476994076247326, "acc_norm": 0.708092485549133, "acc_norm_stderr": 0.024476994076247326 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.43910614525139663, "acc_stderr": 0.016598022120580425, "acc_norm": 0.43910614525139663, "acc_norm_stderr": 0.016598022120580425 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7124183006535948, "acc_stderr": 0.02591780611714716, "acc_norm": 0.7124183006535948, "acc_norm_stderr": 0.02591780611714716 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6784565916398714, "acc_stderr": 0.026527724079528872, "acc_norm": 0.6784565916398714, "acc_norm_stderr": 0.026527724079528872 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6697530864197531, "acc_stderr": 0.026168298456732846, "acc_norm": 0.6697530864197531, "acc_norm_stderr": 0.026168298456732846 }, 
"harness|hendrycksTest-professional_accounting|5": { "acc": 0.4929078014184397, "acc_stderr": 0.02982449855912901, "acc_norm": 0.4929078014184397, "acc_norm_stderr": 0.02982449855912901 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4641460234680574, "acc_stderr": 0.012737361318730583, "acc_norm": 0.4641460234680574, "acc_norm_stderr": 0.012737361318730583 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6875, "acc_stderr": 0.02815637344037142, "acc_norm": 0.6875, "acc_norm_stderr": 0.02815637344037142 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6372549019607843, "acc_stderr": 0.019450768432505514, "acc_norm": 0.6372549019607843, "acc_norm_stderr": 0.019450768432505514 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 0.04350271442923243, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.04350271442923243 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7591836734693878, "acc_stderr": 0.02737294220178816, "acc_norm": 0.7591836734693878, "acc_norm_stderr": 0.02737294220178816 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8258706467661692, "acc_stderr": 0.026814951200421606, "acc_norm": 0.8258706467661692, "acc_norm_stderr": 0.026814951200421606 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.0348735088019777, "acc_norm": 0.86, "acc_norm_stderr": 0.0348735088019777 }, "harness|hendrycksTest-virology|5": { "acc": 0.45180722891566266, "acc_stderr": 0.03874371556587953, "acc_norm": 0.45180722891566266, "acc_norm_stderr": 0.03874371556587953 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.02954774168764004, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.02954774168764004 }, "harness|truthfulqa:mc|0": { "mc1": 0.35128518971848227, "mc1_stderr": 0.0167113581635444, "mc2": 0.5074966311746293, "mc2_stderr": 0.01566752376256644 }, "harness|winogrande|5": { "acc": 0.739542225730071, "acc_stderr": 0.012334833671998285 }, "harness|gsm8k|5": { "acc": 0.3601213040181956, "acc_stderr": 0.013222559423250485 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
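The card's own snippet loads a single per-task configuration; the aggregated numbers it describes in the "results" configuration can be read the same way. A minimal sketch follows, using the `results` config and `latest` split named in the card; the exact row layout of the results table is not documented here, so the final `print` is illustrative only.

```python
from datasets import load_dataset

# Minimal sketch: fetch the aggregated metrics via the "results" config.
# The "latest" split mirrors the newest run (2023-12-27T12:43:24.987428 here).
results = load_dataset(
    "open-llm-leaderboard/details_Walmart-the-bag__Yi-6B-Infinity-Chat",
    "results",
    split="latest",
)
# Illustrative only: the row/column layout is not documented in this card,
# but the values should match the JSON above (e.g. overall acc of about 0.635).
print(results[0])
```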
open-llm-leaderboard/details_Walmart-the-bag__Yi-6B-Infinity-Chat
[ "region:us" ]
2023-12-27T12:45:38+00:00
{"pretty_name": "Evaluation run of Walmart-the-bag/Yi-6B-Infinity-Chat", "dataset_summary": "Dataset automatically created during the evaluation run of model [Walmart-the-bag/Yi-6B-Infinity-Chat](https://huggingface.co/Walmart-the-bag/Yi-6B-Infinity-Chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Walmart-the-bag__Yi-6B-Infinity-Chat\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T12:43:24.987428](https://huggingface.co/datasets/open-llm-leaderboard/details_Walmart-the-bag__Yi-6B-Infinity-Chat/blob/main/results_2023-12-27T12-43-24.987428.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6349062046421136,\n \"acc_stderr\": 0.03216959197766624,\n \"acc_norm\": 0.641573735628457,\n \"acc_norm_stderr\": 0.03281243961732792,\n \"mc1\": 0.35128518971848227,\n \"mc1_stderr\": 0.0167113581635444,\n \"mc2\": 0.5074966311746293,\n \"mc2_stderr\": 0.01566752376256644\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5315699658703071,\n \"acc_stderr\": 0.014582236460866978,\n \"acc_norm\": 0.5656996587030717,\n \"acc_norm_stderr\": 0.01448470304885736\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5875323640709023,\n \"acc_stderr\": 0.004912723848944794,\n \"acc_norm\": 0.7766381198964349,\n \"acc_norm_stderr\": 0.0041564771409096125\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6444444444444445,\n \"acc_stderr\": 0.04135176749720385,\n \"acc_norm\": 0.6444444444444445,\n \"acc_norm_stderr\": 0.04135176749720385\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7105263157894737,\n \"acc_stderr\": 0.03690677986137282,\n \"acc_norm\": 0.7105263157894737,\n \"acc_norm_stderr\": 0.03690677986137282\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322663,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322663\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6736111111111112,\n \"acc_stderr\": 0.03921067198982266,\n \"acc_norm\": 0.6736111111111112,\n \"acc_norm_stderr\": 0.03921067198982266\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 
0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6647398843930635,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.6647398843930635,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082636,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082636\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5957446808510638,\n \"acc_stderr\": 0.03208115750788684,\n \"acc_norm\": 0.5957446808510638,\n \"acc_norm_stderr\": 0.03208115750788684\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.3684210526315789,\n \"acc_stderr\": 0.04537815354939392,\n \"acc_norm\": 0.3684210526315789,\n \"acc_norm_stderr\": 0.04537815354939392\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6275862068965518,\n \"acc_stderr\": 0.04028731532947558,\n \"acc_norm\": 0.6275862068965518,\n \"acc_norm_stderr\": 0.04028731532947558\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.02554284681740048,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.02554284681740048\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42063492063492064,\n \"acc_stderr\": 0.04415438226743744,\n \"acc_norm\": 0.42063492063492064,\n \"acc_norm_stderr\": 0.04415438226743744\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.02366421667164251,\n \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.02366421667164251\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5024630541871922,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.5024630541871922,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.03374402644139403,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.03374402644139403\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8282828282828283,\n \"acc_stderr\": 0.02686971618742991,\n \"acc_norm\": 0.8282828282828283,\n \"acc_norm_stderr\": 0.02686971618742991\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.6641025641025641,\n \"acc_stderr\": 0.023946724741563966,\n \"acc_norm\": 0.6641025641025641,\n \"acc_norm_stderr\": 0.023946724741563966\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3074074074074074,\n \"acc_stderr\": 0.028133252578815635,\n \"acc_norm\": 0.3074074074074074,\n \"acc_norm_stderr\": 0.028133252578815635\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7773109243697479,\n \"acc_stderr\": 0.027025433498882385,\n \"acc_norm\": 0.7773109243697479,\n \"acc_norm_stderr\": 0.027025433498882385\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8366972477064221,\n \"acc_stderr\": 0.01584825580650155,\n \"acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.01584825580650155\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5694444444444444,\n \"acc_stderr\": 0.03376922151252335,\n \"acc_norm\": 0.5694444444444444,\n \"acc_norm_stderr\": 0.03376922151252335\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.029331162294251735,\n \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.029331162294251735\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290923,\n \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290923\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6591928251121076,\n \"acc_stderr\": 0.0318114974705536,\n \"acc_norm\": 0.6591928251121076,\n \"acc_norm_stderr\": 0.0318114974705536\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768362,\n \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768362\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7355371900826446,\n \"acc_stderr\": 0.040261875275912046,\n \"acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.040261875275912046\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7423312883435583,\n \"acc_stderr\": 0.03436150827846917,\n \"acc_norm\": 0.7423312883435583,\n \"acc_norm_stderr\": 0.03436150827846917\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8349514563106796,\n \"acc_stderr\": 0.036756688322331886,\n \"acc_norm\": 0.8349514563106796,\n \"acc_norm_stderr\": 0.036756688322331886\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.9017094017094017,\n \"acc_stderr\": 0.019503444900757567,\n \"acc_norm\": 0.9017094017094017,\n \"acc_norm_stderr\": 0.019503444900757567\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8033205619412516,\n \"acc_stderr\": 0.014214138556913915,\n 
\"acc_norm\": 0.8033205619412516,\n \"acc_norm_stderr\": 0.014214138556913915\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.708092485549133,\n \"acc_stderr\": 0.024476994076247326,\n \"acc_norm\": 0.708092485549133,\n \"acc_norm_stderr\": 0.024476994076247326\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.43910614525139663,\n \"acc_stderr\": 0.016598022120580425,\n \"acc_norm\": 0.43910614525139663,\n \"acc_norm_stderr\": 0.016598022120580425\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6784565916398714,\n \"acc_stderr\": 0.026527724079528872,\n \"acc_norm\": 0.6784565916398714,\n \"acc_norm_stderr\": 0.026527724079528872\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6697530864197531,\n \"acc_stderr\": 0.026168298456732846,\n \"acc_norm\": 0.6697530864197531,\n \"acc_norm_stderr\": 0.026168298456732846\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4641460234680574,\n \"acc_stderr\": 0.012737361318730583,\n \"acc_norm\": 0.4641460234680574,\n \"acc_norm_stderr\": 0.012737361318730583\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6372549019607843,\n \"acc_stderr\": 0.019450768432505514,\n \"acc_norm\": 0.6372549019607843,\n \"acc_norm_stderr\": 0.019450768432505514\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7591836734693878,\n \"acc_stderr\": 0.02737294220178816,\n \"acc_norm\": 0.7591836734693878,\n \"acc_norm_stderr\": 0.02737294220178816\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421606,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421606\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.0348735088019777,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.0348735088019777\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.45180722891566266,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.45180722891566266,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.02954774168764004,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.02954774168764004\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.35128518971848227,\n \"mc1_stderr\": 0.0167113581635444,\n \"mc2\": 0.5074966311746293,\n \"mc2_stderr\": 0.01566752376256644\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.739542225730071,\n \"acc_stderr\": 0.012334833671998285\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3601213040181956,\n \"acc_stderr\": 0.013222559423250485\n }\n}\n```", "repo_url": "https://huggingface.co/Walmart-the-bag/Yi-6B-Infinity-Chat", 
"leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-43-24.987428.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-43-24.987428.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-43-24.987428.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-43-24.987428.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-43-24.987428.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-43-24.987428.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["**/details_harness|winogrande|5_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T12-43-24.987428.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T12_43_24.987428", "path": ["results_2023-12-27T12-43-24.987428.parquet"]}, {"split": "latest", "path": 
["results_2023-12-27T12-43-24.987428.parquet"]}]}]}
2023-12-27T12:46:00+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Walmart-the-bag/Yi-6B-Infinity-Chat Dataset automatically created during the evaluation run of model Walmart-the-bag/Yi-6B-Infinity-Chat on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T12:43:24.987428 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
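The card above references a loading snippet that does not survive in this plain-text rendering. A minimal sketch of what that call would look like, assuming the details repo follows the leaderboard's usual `details_<org>__<model>` naming scheme (the repo id below is inferred from that convention, not quoted from this record):

```python
# Sketch: load one of the 63 per-task configurations from the details repo.
# The repo id is an assumption based on the leaderboard naming convention;
# the config name "harness_winogrande_5" appears in the configs metadata above.
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_Walmart-the-bag__Yi-6B-Infinity-Chat",  # assumed repo id
    "harness_winogrande_5",
    split="train",  # "train" always points to the latest results
)
print(data)
```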
[ "# Dataset Card for Evaluation run of Walmart-the-bag/Yi-6B-Infinity-Chat\n\n\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/Yi-6B-Infinity-Chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:43:24.987428(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Walmart-the-bag/Yi-6B-Infinity-Chat\n\n\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/Yi-6B-Infinity-Chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:43:24.987428(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 195, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Walmart-the-bag/Yi-6B-Infinity-Chat\n\n\n\nDataset automatically created during the evaluation run of model Walmart-the-bag/Yi-6B-Infinity-Chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T12:43:24.987428(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]" ]
11523196475823a5f57c1998f859d0e84830c738
# VBench Human Annotation ## Dataset Description - **Homepage:** [VBench](https://vchitect.github.io/VBench-project/) - **Repository:** [VBench-Code](https://github.com/Vchitect/VBench) - **Paper:** [2311.17982](https://arxiv.org/abs/2311.17982) - **Point of Contact:** mailto:[Ziqi]([email protected])
Vchitect/VBench_human_annotation
[ "size_categories:1K<n<10K", "language:en", "license:mit", "arxiv:2311.17982", "region:us" ]
2023-12-27T12:46:46+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "pretty_name": "VBench Human Anno", "extra_gated_prompt": "You agree to not use the data to conduct experiments that cause harm to human subjects.", "extra_gated_fields": {"Name": "text", "Company/Organization": "text", "E-Mail": "text"}}
2024-01-04T07:42:45+00:00
[ "2311.17982" ]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-mit #arxiv-2311.17982 #region-us
# VBench Human Annotation ## Dataset Description - Homepage: VBench - Repository: VBench-Code - Paper: 2311.17982 - Point of Contact: mailto:Ziqi
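According to this record's metadata, the repo is gated behind an access form (name, organization, e-mail). A minimal sketch of inspecting the repo once access has been granted; the file layout is not documented in the card, so the snippet only authenticates and lists what the repo contains:

```python
# Sketch: enumerate files in the gated annotation repo after access is granted.
# The token below is a placeholder and must be a real user token with access.
from huggingface_hub import HfApi

api = HfApi(token="hf_...")  # placeholder token (assumption)
files = api.list_repo_files("Vchitect/VBench_human_annotation", repo_type="dataset")
print(files)
```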
[ "# VBench Human Annotation", "## Dataset Description\n\n- Homepage: VBench\n- Repository: VBench-Code\n- Paper: 2311.17982\n- Point of Contact: mailto:Ziqi" ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-mit #arxiv-2311.17982 #region-us \n", "# VBench Human Annotation", "## Dataset Description\n\n- Homepage: VBench\n- Repository: VBench-Code\n- Paper: 2311.17982\n- Point of Contact: mailto:Ziqi" ]
[ 36, 7, 38 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-mit #arxiv-2311.17982 #region-us \n# VBench Human Annotation## Dataset Description\n\n- Homepage: VBench\n- Repository: VBench-Code\n- Paper: 2311.17982\n- Point of Contact: mailto:Ziqi" ]
a2d153eb15f72f6754922daf01da2d358942bca6
# Dataset Card for Evaluation run of ericpolewski/AIRIC-The-Mistral <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [ericpolewski/AIRIC-The-Mistral](https://huggingface.co/ericpolewski/AIRIC-The-Mistral) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_ericpolewski__AIRIC-The-Mistral", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T12:44:47.961530](https://huggingface.co/datasets/open-llm-leaderboard/details_ericpolewski__AIRIC-The-Mistral/blob/main/results_2023-12-27T12-44-47.961530.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6039985368189299, "acc_stderr": 0.032978897634621786, "acc_norm": 0.6103242147836283, "acc_norm_stderr": 0.03365907243674515, "mc1": 0.32558139534883723, "mc1_stderr": 0.01640398946990783, "mc2": 0.48243440199003346, "mc2_stderr": 0.014709550914921755 }, "harness|arc:challenge|25": { "acc": 0.5571672354948806, "acc_stderr": 0.014515573873348913, "acc_norm": 0.5998293515358362, "acc_norm_stderr": 0.01431719778780918 }, "harness|hellaswag|10": { "acc": 0.6291575383389763, "acc_stderr": 0.004820431839600027, "acc_norm": 0.8298147779326828, "acc_norm_stderr": 0.0037502741958275972 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.29, "acc_stderr": 0.04560480215720683, "acc_norm": 0.29, "acc_norm_stderr": 0.04560480215720683 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5925925925925926, "acc_stderr": 0.042446332383532265, "acc_norm": 0.5925925925925926, "acc_norm_stderr": 0.042446332383532265 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6578947368421053, "acc_stderr": 0.038607315993160904, "acc_norm": 0.6578947368421053, "acc_norm_stderr": 0.038607315993160904 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6679245283018868, "acc_stderr": 0.02898545565233439, "acc_norm": 0.6679245283018868, "acc_norm_stderr": 0.02898545565233439 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6527777777777778, "acc_stderr": 0.039812405437178615, "acc_norm": 0.6527777777777778, "acc_norm_stderr": 0.039812405437178615 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 },
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416907, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416907 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3333333333333333, "acc_stderr": 0.04690650298201942, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.04690650298201942 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5106382978723404, "acc_stderr": 0.03267862331014063, "acc_norm": 0.5106382978723404, "acc_norm_stderr": 0.03267862331014063 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.43859649122807015, "acc_stderr": 0.04668000738510455, "acc_norm": 0.43859649122807015, "acc_norm_stderr": 0.04668000738510455 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5379310344827586, "acc_stderr": 0.04154659671707548, "acc_norm": 0.5379310344827586, "acc_norm_stderr": 0.04154659671707548 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.37566137566137564, "acc_stderr": 0.024942368931159784, "acc_norm": 0.37566137566137564, "acc_norm_stderr": 0.024942368931159784 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.38095238095238093, "acc_stderr": 0.04343525428949097, "acc_norm": 0.38095238095238093, "acc_norm_stderr": 0.04343525428949097 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.41, "acc_stderr": 0.049431107042371025, "acc_norm": 0.41, "acc_norm_stderr": 0.049431107042371025 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7161290322580646, "acc_stderr": 0.025649381063029265, "acc_norm": 0.7161290322580646, "acc_norm_stderr": 0.025649381063029265 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.63, "acc_stderr": 0.048523658709391, "acc_norm": 0.63, "acc_norm_stderr": 0.048523658709391 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7090909090909091, "acc_stderr": 0.03546563019624336, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.03546563019624336 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7323232323232324, "acc_stderr": 0.03154449888270286, "acc_norm": 0.7323232323232324, "acc_norm_stderr": 0.03154449888270286 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8186528497409327, "acc_stderr": 0.02780703236068609, "acc_norm": 0.8186528497409327, "acc_norm_stderr": 0.02780703236068609 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6, "acc_stderr": 0.02483881198803316, "acc_norm": 0.6, "acc_norm_stderr": 0.02483881198803316 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.362962962962963, "acc_stderr": 0.029318203645206865, "acc_norm": 0.362962962962963, "acc_norm_stderr": 0.029318203645206865 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6386554621848739, "acc_stderr": 0.031204691225150023, "acc_norm": 0.6386554621848739, "acc_norm_stderr": 0.031204691225150023 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.37748344370860926, "acc_stderr": 0.03958027231121569, "acc_norm": 
0.37748344370860926, "acc_norm_stderr": 0.03958027231121569 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.7798165137614679, "acc_stderr": 0.01776597865232756, "acc_norm": 0.7798165137614679, "acc_norm_stderr": 0.01776597865232756 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.47685185185185186, "acc_stderr": 0.03406315360711507, "acc_norm": 0.47685185185185186, "acc_norm_stderr": 0.03406315360711507 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.028626547912437406, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.028626547912437406 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7679324894514767, "acc_stderr": 0.02747974455080851, "acc_norm": 0.7679324894514767, "acc_norm_stderr": 0.02747974455080851 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6771300448430493, "acc_stderr": 0.03138147637575499, "acc_norm": 0.6771300448430493, "acc_norm_stderr": 0.03138147637575499 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.732824427480916, "acc_stderr": 0.03880848301082396, "acc_norm": 0.732824427480916, "acc_norm_stderr": 0.03880848301082396 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098822, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098822 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.75, "acc_stderr": 0.04186091791394607, "acc_norm": 0.75, "acc_norm_stderr": 0.04186091791394607 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7239263803680982, "acc_stderr": 0.035123852837050475, "acc_norm": 0.7239263803680982, "acc_norm_stderr": 0.035123852837050475 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.35714285714285715, "acc_stderr": 0.04547960999764376, "acc_norm": 0.35714285714285715, "acc_norm_stderr": 0.04547960999764376 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8632478632478633, "acc_stderr": 0.02250903393707781, "acc_norm": 0.8632478632478633, "acc_norm_stderr": 0.02250903393707781 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.68, "acc_stderr": 0.04688261722621504, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7828863346104725, "acc_stderr": 0.014743125394823291, "acc_norm": 0.7828863346104725, "acc_norm_stderr": 0.014743125394823291 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7052023121387283, "acc_stderr": 0.024547617794803828, "acc_norm": 0.7052023121387283, "acc_norm_stderr": 0.024547617794803828 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.22793296089385476, "acc_stderr": 0.014030149950805097, "acc_norm": 0.22793296089385476, "acc_norm_stderr": 0.014030149950805097 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7124183006535948, "acc_stderr": 0.02591780611714716, "acc_norm": 0.7124183006535948, "acc_norm_stderr": 0.02591780611714716 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6655948553054662, "acc_stderr": 0.026795422327893934, "acc_norm": 0.6655948553054662, "acc_norm_stderr": 0.026795422327893934 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6882716049382716, "acc_stderr": 0.02577311116963045, "acc_norm": 0.6882716049382716, "acc_norm_stderr": 0.02577311116963045 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4645390070921986, 
"acc_stderr": 0.029752389657427047, "acc_norm": 0.4645390070921986, "acc_norm_stderr": 0.029752389657427047 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4230769230769231, "acc_stderr": 0.012618204066588392, "acc_norm": 0.4230769230769231, "acc_norm_stderr": 0.012618204066588392 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6029411764705882, "acc_stderr": 0.02972215209928006, "acc_norm": 0.6029411764705882, "acc_norm_stderr": 0.02972215209928006 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5947712418300654, "acc_stderr": 0.019861155193829156, "acc_norm": 0.5947712418300654, "acc_norm_stderr": 0.019861155193829156 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6090909090909091, "acc_stderr": 0.046737523336702384, "acc_norm": 0.6090909090909091, "acc_norm_stderr": 0.046737523336702384 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6448979591836734, "acc_stderr": 0.030635655150387638, "acc_norm": 0.6448979591836734, "acc_norm_stderr": 0.030635655150387638 }, "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.026193923544454115, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.026193923544454115 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.85, "acc_stderr": 0.0358870281282637, "acc_norm": 0.85, "acc_norm_stderr": 0.0358870281282637 }, "harness|hendrycksTest-virology|5": { "acc": 0.536144578313253, "acc_stderr": 0.03882310850890594, "acc_norm": 0.536144578313253, "acc_norm_stderr": 0.03882310850890594 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.32558139534883723, "mc1_stderr": 0.01640398946990783, "mc2": 0.48243440199003346, "mc2_stderr": 0.014709550914921755 }, "harness|winogrande|5": { "acc": 0.7695343330702447, "acc_stderr": 0.011835872164836673 }, "harness|gsm8k|5": { "acc": 0.30856709628506446, "acc_stderr": 0.012723076049815882 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_ericpolewski__AIRIC-The-Mistral
[ "region:us" ]
2023-12-27T12:47:02+00:00
{"pretty_name": "Evaluation run of ericpolewski/AIRIC-The-Mistral", "dataset_summary": "Dataset automatically created during the evaluation run of model [ericpolewski/AIRIC-The-Mistral](https://huggingface.co/ericpolewski/AIRIC-The-Mistral) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_ericpolewski__AIRIC-The-Mistral\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T12:44:47.961530](https://huggingface.co/datasets/open-llm-leaderboard/details_ericpolewski__AIRIC-The-Mistral/blob/main/results_2023-12-27T12-44-47.961530.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6039985368189299,\n \"acc_stderr\": 0.032978897634621786,\n \"acc_norm\": 0.6103242147836283,\n \"acc_norm_stderr\": 0.03365907243674515,\n \"mc1\": 0.32558139534883723,\n \"mc1_stderr\": 0.01640398946990783,\n \"mc2\": 0.48243440199003346,\n \"mc2_stderr\": 0.014709550914921755\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5571672354948806,\n \"acc_stderr\": 0.014515573873348913,\n \"acc_norm\": 0.5998293515358362,\n \"acc_norm_stderr\": 0.01431719778780918\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6291575383389763,\n \"acc_stderr\": 0.004820431839600027,\n \"acc_norm\": 0.8298147779326828,\n \"acc_norm_stderr\": 0.0037502741958275972\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720683,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720683\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5925925925925926,\n \"acc_stderr\": 0.042446332383532265,\n \"acc_norm\": 0.5925925925925926,\n \"acc_norm_stderr\": 0.042446332383532265\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.038607315993160904,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.038607315993160904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.02898545565233439,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.02898545565233439\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n \"acc_stderr\": 0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n \"acc_norm_stderr\": 0.039812405437178615\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 
0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416907,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416907\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04690650298201942,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04690650298201942\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.03267862331014063,\n \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.03267862331014063\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.43859649122807015,\n \"acc_stderr\": 0.04668000738510455,\n \"acc_norm\": 0.43859649122807015,\n \"acc_norm_stderr\": 0.04668000738510455\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5379310344827586,\n \"acc_stderr\": 0.04154659671707548,\n \"acc_norm\": 0.5379310344827586,\n \"acc_norm_stderr\": 0.04154659671707548\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.37566137566137564,\n \"acc_stderr\": 0.024942368931159784,\n \"acc_norm\": 0.37566137566137564,\n \"acc_norm_stderr\": 0.024942368931159784\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.38095238095238093,\n \"acc_stderr\": 0.04343525428949097,\n \"acc_norm\": 0.38095238095238093,\n \"acc_norm_stderr\": 0.04343525428949097\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7161290322580646,\n \"acc_stderr\": 0.025649381063029265,\n \"acc_norm\": 0.7161290322580646,\n \"acc_norm_stderr\": 0.025649381063029265\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.048523658709391,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.048523658709391\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.03546563019624336,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.03546563019624336\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7323232323232324,\n \"acc_stderr\": 0.03154449888270286,\n \"acc_norm\": 0.7323232323232324,\n \"acc_norm_stderr\": 0.03154449888270286\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8186528497409327,\n \"acc_stderr\": 0.02780703236068609,\n \"acc_norm\": 0.8186528497409327,\n \"acc_norm_stderr\": 0.02780703236068609\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.02483881198803316,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.02483881198803316\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.362962962962963,\n \"acc_stderr\": 0.029318203645206865,\n \"acc_norm\": 0.362962962962963,\n \"acc_norm_stderr\": 0.029318203645206865\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6386554621848739,\n \"acc_stderr\": 0.031204691225150023,\n \"acc_norm\": 0.6386554621848739,\n \"acc_norm_stderr\": 0.031204691225150023\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.37748344370860926,\n \"acc_stderr\": 0.03958027231121569,\n \"acc_norm\": 0.37748344370860926,\n \"acc_norm_stderr\": 0.03958027231121569\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.7798165137614679,\n \"acc_stderr\": 0.01776597865232756,\n \"acc_norm\": 0.7798165137614679,\n \"acc_norm_stderr\": 0.01776597865232756\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.47685185185185186,\n \"acc_stderr\": 0.03406315360711507,\n \"acc_norm\": 0.47685185185185186,\n \"acc_norm_stderr\": 0.03406315360711507\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.028626547912437406,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.028626547912437406\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7679324894514767,\n \"acc_stderr\": 0.02747974455080851,\n \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.02747974455080851\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.03880848301082396,\n \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.03880848301082396\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098822,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098822\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7239263803680982,\n \"acc_stderr\": 0.035123852837050475,\n \"acc_norm\": 0.7239263803680982,\n \"acc_norm_stderr\": 0.035123852837050475\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.04547960999764376,\n \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.04547960999764376\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.02250903393707781,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.02250903393707781\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7828863346104725,\n \"acc_stderr\": 0.014743125394823291,\n 
\"acc_norm\": 0.7828863346104725,\n \"acc_norm_stderr\": 0.014743125394823291\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7052023121387283,\n \"acc_stderr\": 0.024547617794803828,\n \"acc_norm\": 0.7052023121387283,\n \"acc_norm_stderr\": 0.024547617794803828\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.22793296089385476,\n \"acc_stderr\": 0.014030149950805097,\n \"acc_norm\": 0.22793296089385476,\n \"acc_norm_stderr\": 0.014030149950805097\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7124183006535948,\n \"acc_stderr\": 0.02591780611714716,\n \"acc_norm\": 0.7124183006535948,\n \"acc_norm_stderr\": 0.02591780611714716\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n \"acc_stderr\": 0.026795422327893934,\n \"acc_norm\": 0.6655948553054662,\n \"acc_norm_stderr\": 0.026795422327893934\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6882716049382716,\n \"acc_stderr\": 0.02577311116963045,\n \"acc_norm\": 0.6882716049382716,\n \"acc_norm_stderr\": 0.02577311116963045\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4645390070921986,\n \"acc_stderr\": 0.029752389657427047,\n \"acc_norm\": 0.4645390070921986,\n \"acc_norm_stderr\": 0.029752389657427047\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4230769230769231,\n \"acc_stderr\": 0.012618204066588392,\n \"acc_norm\": 0.4230769230769231,\n \"acc_norm_stderr\": 0.012618204066588392\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6029411764705882,\n \"acc_stderr\": 0.02972215209928006,\n \"acc_norm\": 0.6029411764705882,\n \"acc_norm_stderr\": 0.02972215209928006\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5947712418300654,\n \"acc_stderr\": 0.019861155193829156,\n \"acc_norm\": 0.5947712418300654,\n \"acc_norm_stderr\": 0.019861155193829156\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6090909090909091,\n \"acc_stderr\": 0.046737523336702384,\n \"acc_norm\": 0.6090909090909091,\n \"acc_norm_stderr\": 0.046737523336702384\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6448979591836734,\n \"acc_stderr\": 0.030635655150387638,\n \"acc_norm\": 0.6448979591836734,\n \"acc_norm_stderr\": 0.030635655150387638\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.85,\n \"acc_stderr\": 0.0358870281282637,\n \"acc_norm\": 0.85,\n \"acc_norm_stderr\": 0.0358870281282637\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.03882310850890594,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.03882310850890594\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.32558139534883723,\n \"mc1_stderr\": 0.01640398946990783,\n \"mc2\": 0.48243440199003346,\n \"mc2_stderr\": 0.014709550914921755\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7695343330702447,\n \"acc_stderr\": 0.011835872164836673\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.30856709628506446,\n \"acc_stderr\": 0.012723076049815882\n }\n}\n```", "repo_url": 
"https://huggingface.co/ericpolewski/AIRIC-The-Mistral", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-44-47.961530.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-44-47.961530.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-44-47.961530.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-44-47.961530.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-44-47.961530.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-44-47.961530.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["**/details_harness|winogrande|5_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T12-44-47.961530.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T12_44_47.961530", "path": ["results_2023-12-27T12-44-47.961530.parquet"]}, {"split": "latest", "path": 
["results_2023-12-27T12-44-47.961530.parquet"]}]}]}
2023-12-27T12:47:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of ericpolewski/AIRIC-The-Mistral Dataset automatically created during the evaluation run of model ericpolewski/AIRIC-The-Mistral on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T12:44:47.961530 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
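A minimal sketch of the loading step mentioned above, assuming the Hugging Face `datasets` library, the leaderboard's usual `details_<org>__<model>` repository naming, and a config name taken from this card's metadata:

```python
from datasets import load_dataset

# Assumptions: the repository id follows the leaderboard's
# "details_<org>__<model>" naming convention used by sibling cards in this
# dump, and "harness_winogrande_5" is one of the configs listed in this
# card's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_ericpolewski__AIRIC-The-Mistral",
    "harness_winogrande_5",
    split="train",
)
```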
[ "# Dataset Card for Evaluation run of ericpolewski/AIRIC-The-Mistral\n\n\n\nDataset automatically created during the evaluation run of model ericpolewski/AIRIC-The-Mistral on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:44:47.961530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of ericpolewski/AIRIC-The-Mistral\n\n\n\nDataset automatically created during the evaluation run of model ericpolewski/AIRIC-The-Mistral on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:44:47.961530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 185, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of ericpolewski/AIRIC-The-Mistral\n\n\n\nDataset automatically created during the evaluation run of model ericpolewski/AIRIC-The-Mistral on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T12:44:47.961530(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
baa868b943141b48656bdce15722171df6c2c1a5
# Portuguese Hate Speech Dataset (TuPy)

The Portuguese hate speech dataset (TuPy) is an annotated corpus designed to facilitate the development of advanced hate speech detection models using machine learning (ML) and natural language processing (NLP) techniques. TuPy comprises 10,000 (ten thousand) unpublished, annotated, and anonymized documents collected on Twitter (currently known as X) in 2023.

This repository is organized as follows:

```sh
root.
├── binary : binary dataset (including training and testing split)
├── multilabel : multilabel dataset (including training and testing split)
└── README.md : documentation and card metadata
```

TuPy is one of the datasets comprising the expanded dataset called [TuPy-E](https://huggingface.co/datasets/Silly-Machine/TuPyE-Dataset), both under the ownership of Silly Machine. We highly recommend reading the [associated research paper](https://arxiv.org/abs/2312.17704) to gain comprehensive insights into the advancements integrated into this extension.

## Security measures

To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user," and any references to external websites have been omitted.

## Annotation and voting process

To generate the binary matrices, we used a simple voting process. Each document underwent three separate evaluations. If a document received two or more identical classifications, the assigned value was set to 1; otherwise, it was marked as 0. The annotated raw data can be accessed in the [project repository](https://github.com/Silly-Machine/TuPy-Dataset).

The following table offers a brief summary of the annotators' profiles and qualifications:

#### Table 1 – Annotators

| Annotator    | Gender | Education                                      | Political  | Color  |
|--------------|--------|------------------------------------------------|------------|--------|
| Annotator 1  | Female | Ph.D. candidate in civil engineering           | Far-left   | White  |
| Annotator 2  | Male   | Master's candidate in human rights             | Far-left   | Black  |
| Annotator 3  | Female | Master's degree in behavioral psychology       | Liberal    | White  |
| Annotator 4  | Male   | Master's degree in behavioral psychology       | Right-wing | Black  |
| Annotator 5  | Female | Ph.D. candidate in behavioral psychology       | Liberal    | Black  |
| Annotator 6  | Male   | Ph.D. candidate in linguistics                 | Far-left   | White  |
| Annotator 7  | Female | Ph.D. candidate in civil engineering           | Liberal    | White  |
| Annotator 8  | Male   | Ph.D. candidate in civil engineering           | Liberal    | Black  |
| Annotator 9  | Male   | Master's degree in behavioral psychology       | Far-left   | White  |

## Data structure

A data point comprises the tweet text (a string) along with thirteen categories; each category is assigned a value of 0 when aggressive or hateful content is absent and a value of 1 when such content is present. These values represent the consensus of annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. 
An illustration from the multilabel TuPy dataset is depicted below:

```python
{
    "text": "e tem pobre de direita imbecil que ainda defendia a manutenção da política de preços atrelada ao dólar link",
    "aggressive": 1,
    "hate": 1,
    "ageism": 0,
    "aporophobia": 1,
    "body shame": 0,
    "capacitism": 0,
    "lgbtphobia": 0,
    "political": 1,
    "racism": 0,
    "religious intolerance": 0,
    "misogyny": 0,
    "xenophobia": 0,
    "other": 0
}
```

# Dataset content

Table 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents.

#### Table 2 - Count of non-aggressive and aggressive documents

| Label                 | Count  |
|-----------------------|--------|
| Non-aggressive        | 8013   |
| Aggressive - Not hate | 689    |
| Aggressive - Hate     | 1298   |
| Total                 | 10000  |

Table 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.

#### Table 3 - Hate categories count

| Label                     | Count |
|---------------------------|-------|
| Ageism                    | 53    |
| Aporophobia               | 61    |
| Body shame                | 120   |
| Capacitism                | 92    |
| LGBTphobia                | 96    |
| Political                 | 532   |
| Racism                    | 38    |
| Religious intolerance     | 28    |
| Misogyny                  | 207   |
| Xenophobia                | 70    |
| Other                     | 1     |
| Total                     | 1298  |

# BibTeX citation

This dataset can be cited as follows:

```bibtex
@misc{silly-machine_2023,
    author    = { {Silly-Machine} },
    title     = { TuPy-Dataset (Revision de6b18c) },
    year      = 2023,
    url       = { https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset },
    doi       = { 10.57967/hf/1529 },
    publisher = { Hugging Face }
}
```

# Acknowledgements

The TuPy project is the result of the development of Felipe Oliveira's thesis and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro ([UFRJ](https://ufrj.br/)) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering ([COPPE](https://coppe.ufrj.br/)).
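# Loading the data

A minimal loading sketch, assuming the Hugging Face `datasets` library; the config names and splits follow the repository layout described above:

```python
from datasets import load_dataset

# Config names ("binary", "multilabel") and splits ("train", "test") follow
# the repository layout described in this card.
binary_train = load_dataset("Silly-Machine/TuPy-Dataset", "binary", split="train")
multilabel_test = load_dataset("Silly-Machine/TuPy-Dataset", "multilabel", split="test")

# Each multilabel row carries the tweet text plus the thirteen 0/1 labels.
print(multilabel_test[0])
```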
Silly-Machine/TuPy-Dataset
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:Brazilian-Portuguese", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:original", "language:pt", "license:cc-by-4.0", "hate-speech-detection", "arxiv:2312.17704", "doi:10.57967/hf/1529", "region:us" ]
2023-12-27T12:59:25+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["Brazilian-Portuguese"], "language": ["pt"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["original"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "TuPy-Dataset", "language_bcp47": ["pt-BR"], "tags": ["hate-speech-detection"], "configs": [{"config_name": "multilabel", "data_files": [{"split": "train", "path": "multilabel/multilabel_train.csv"}, {"split": "test", "path": "multilabel/multilabel_test.csv"}]}, {"config_name": "binary", "data_files": [{"split": "train", "path": "binary/binary_train.csv"}, {"split": "test", "path": "binary/binary_test.csv"}]}]}
2024-01-01T14:43:46+00:00
[ "2312.17704" ]
[ "pt" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-Brazilian-Portuguese #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #hate-speech-detection #arxiv-2312.17704 #doi-10.57967/hf/1529 #region-us
Portuguese Hate Speech Dataset (TuPy) ===================================== The Portuguese hate speech dataset (TuPy) is an annotated corpus designed to facilitate the development of advanced hate speech detection models using machine learning (ML) and natural language processing (NLP) techniques. TuPy is comprised of 10,000 (ten thousand) unpublished, annotated, and anonymized documents collected on Twitter (currently known as X) in 2023. This repository is organized as follows: TuPy is one of the datasets comprising the expanded dataset called TuPy-E, both under the ownership of Silly Machine. We highly recommend reading the associated research paper to gain comprehensive insights into the advancements integrated into this extension. Security measures ----------------- To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user," and any references to external websites have been omitted Annotation and voting process ----------------------------- To generate the binary matrices, we utilized a simple voting process. Each document underwent three separate evaluations. If a document received two or more identical classifications, the assigned value was set to 1; otherwise, it was marked as 0. The annotated raw data can be accessed in the project repository. The following table offers a brief summary of the annotators' profiles and qualifications: #### Table 1 – Annotators Data structure -------------- A data point comprises the tweet text (a string) along with thirteen categories, each category is assigned a value of 0 when there is an absence of aggressive or hateful content and a value of 1 when such content is present. These values represent the consensus of annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. An illustration from the multilabel TuPy dataset is depicted below: Dataset content =============== Table 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents #### Table 2 - Count of non-aggressive and aggressive documents Table 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech. #### Table 3 - Hate categories count BibTeX citation =============== This dataset can be cited as follows: Acknowledge =========== The TuPy project is the result of the development of Felipe Oliveira's thesis and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE).
[ "#### Table 1 – Annotators\n\n\n\nData structure\n--------------\n\n\nA data point comprises the tweet text (a string) along with thirteen categories, each category is assigned a value of 0 when there is an absence of aggressive or hateful content and a value of 1 when such content is present. These values represent the consensus of annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. An illustration from the multilabel TuPy dataset is depicted below:\n\n\nDataset content\n===============\n\n\nTable 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents", "#### Table 2 - Count of non-aggressive and aggressive documents\n\n\n\nTable 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.", "#### Table 3 - Hate categories count\n\n\n\nBibTeX citation\n===============\n\n\nThis dataset can be cited as follows:\n\n\nAcknowledge\n===========\n\n\nThe TuPy project is the result of the development of Felipe Oliveira's thesis and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE)." ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-Brazilian-Portuguese #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #hate-speech-detection #arxiv-2312.17704 #doi-10.57967/hf/1529 #region-us \n", "#### Table 1 – Annotators\n\n\n\nData structure\n--------------\n\n\nA data point comprises the tweet text (a string) along with thirteen categories, each category is assigned a value of 0 when there is an absence of aggressive or hateful content and a value of 1 when such content is present. These values represent the consensus of annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. An illustration from the multilabel TuPy dataset is depicted below:\n\n\nDataset content\n===============\n\n\nTable 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents", "#### Table 2 - Count of non-aggressive and aggressive documents\n\n\n\nTable 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.", "#### Table 3 - Hate categories count\n\n\n\nBibTeX citation\n===============\n\n\nThis dataset can be cited as follows:\n\n\nAcknowledge\n===========\n\n\nThe TuPy project is the result of the development of Felipe Oliveira's thesis and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE)." ]
[ 117, 193, 49, 106 ]
[ "passage: TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-Brazilian-Portuguese #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-original #language-Portuguese #license-cc-by-4.0 #hate-speech-detection #arxiv-2312.17704 #doi-10.57967/hf/1529 #region-us \n#### Table 1 – Annotators\n\n\n\nData structure\n--------------\n\n\nA data point comprises the tweet text (a string) along with thirteen categories, each category is assigned a value of 0 when there is an absence of aggressive or hateful content and a value of 1 when such content is present. These values represent the consensus of annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. An illustration from the multilabel TuPy dataset is depicted below:\n\n\nDataset content\n===============\n\n\nTable 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents#### Table 2 - Count of non-aggressive and aggressive documents\n\n\n\nTable 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.#### Table 3 - Hate categories count\n\n\n\nBibTeX citation\n===============\n\n\nThis dataset can be cited as follows:\n\n\nAcknowledge\n===========\n\n\nThe TuPy project is the result of the development of Felipe Oliveira's thesis and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE)." ]
8f408a76b2aeffbce5f00779c7c37ce51e2dd175
[oasst2](https://huggingface.co/datasets/OpenAssistant/oasst2) in a friendlier format
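A minimal loading sketch (an illustration, not part of the original card; field names follow the dataset's declared features — `history`, `prompt`, and `completions` with per-completion `rank`, `text`, and `labels` — and it assumes, as in oasst, that rank 0 marks the top-ranked completion):

```python
from datasets import load_dataset

# Single "train" split with ranked completions per prompt.
ds = load_dataset("euclaise/oasst2_rank", split="train")

example = ds[0]
print(example["prompt"])
# Each completion carries a rank plus aggregated annotator labels.
for comp in sorted(example["completions"], key=lambda c: c["rank"]):
    print(comp["rank"], comp["text"][:80])
```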
euclaise/oasst2_rank
[ "license:apache-2.0", "region:us" ]
2023-12-27T13:00:05+00:00
{"license": "apache-2.0", "dataset_info": {"features": [{"name": "history", "list": [{"name": "role", "dtype": "string"}, {"name": "text", "dtype": "string"}]}, {"name": "prompt", "dtype": "string"}, {"name": "completions", "list": [{"name": "labels", "struct": [{"name": "creativity", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "fails_task", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "hate_speech", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "helpfulness", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "humor", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "lang_mismatch", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "moral_judgement", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "not_appropriate", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "pii", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "political_content", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "quality", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "sexual_content", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "spam", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "toxicity", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}, {"name": "violence", "struct": [{"name": "count", "dtype": "int64"}, {"name": "value", "dtype": "float64"}]}]}, {"name": "rank", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 106295033, "num_examples": 28383}], "download_size": 49057236, "dataset_size": 106295033}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-27T13:10:44+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
oasst2 in a friendlier format
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
a26926bcebbd7a26ed6744d5f40bf307a3c92e83
# Dataset Card for Evaluation run of SanjiWatsuki/openchat-3.5-1210-starling-slerp <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [SanjiWatsuki/openchat-3.5-1210-starling-slerp](https://huggingface.co/SanjiWatsuki/openchat-3.5-1210-starling-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SanjiWatsuki__openchat-3.5-1210-starling-slerp", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T12:59:24.501037](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__openchat-3.5-1210-starling-slerp/blob/main/results_2023-12-27T12-59-24.501037.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6530257167647132, "acc_stderr": 0.031906035120016406, "acc_norm": 0.6537129491623167, "acc_norm_stderr": 0.03255707134476186, "mc1": 0.3378212974296206, "mc1_stderr": 0.01655716732251688, "mc2": 0.4992288323014176, "mc2_stderr": 0.015334932030447291 }, "harness|arc:challenge|25": { "acc": 0.6075085324232082, "acc_stderr": 0.014269634635670728, "acc_norm": 0.6390784982935154, "acc_norm_stderr": 0.014034761386175452 }, "harness|hellaswag|10": { "acc": 0.6713802031467835, "acc_stderr": 0.004687514708345319, "acc_norm": 0.8527185819557856, "acc_norm_stderr": 0.003536619673019997 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.38, "acc_stderr": 0.048783173121456316, "acc_norm": 0.38, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6710526315789473, "acc_stderr": 0.03823428969926605, "acc_norm": 0.6710526315789473, "acc_norm_stderr": 0.03823428969926605 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118637, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118637 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7847222222222222, "acc_stderr": 0.03437079344106135, "acc_norm": 0.7847222222222222, "acc_norm_stderr": 0.03437079344106135 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.49, "acc_stderr": 0.05024183937956912, "acc_norm": 0.49, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.55, "acc_stderr": 0.049999999999999996, "acc_norm":
0.55, "acc_norm_stderr": 0.049999999999999996 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6763005780346821, "acc_stderr": 0.0356760379963917, "acc_norm": 0.6763005780346821, "acc_norm_stderr": 0.0356760379963917 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.45098039215686275, "acc_stderr": 0.049512182523962625, "acc_norm": 0.45098039215686275, "acc_norm_stderr": 0.049512182523962625 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.75, "acc_stderr": 0.04351941398892446, "acc_norm": 0.75, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5872340425531914, "acc_stderr": 0.03218471141400352, "acc_norm": 0.5872340425531914, "acc_norm_stderr": 0.03218471141400352 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5087719298245614, "acc_stderr": 0.04702880432049615, "acc_norm": 0.5087719298245614, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5793103448275863, "acc_stderr": 0.0411391498118926, "acc_norm": 0.5793103448275863, "acc_norm_stderr": 0.0411391498118926 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3968253968253968, "acc_stderr": 0.025197101074246483, "acc_norm": 0.3968253968253968, "acc_norm_stderr": 0.025197101074246483 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5158730158730159, "acc_stderr": 0.044698818540726076, "acc_norm": 0.5158730158730159, "acc_norm_stderr": 0.044698818540726076 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.24, "acc_stderr": 0.04292346959909282, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909282 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7935483870967742, "acc_stderr": 0.023025899617188716, "acc_norm": 0.7935483870967742, "acc_norm_stderr": 0.023025899617188716 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4876847290640394, "acc_stderr": 0.035169204442208966, "acc_norm": 0.4876847290640394, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.74, "acc_stderr": 0.044084400227680794, "acc_norm": 0.74, "acc_norm_stderr": 0.044084400227680794 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.793939393939394, "acc_stderr": 0.0315841532404771, "acc_norm": 0.793939393939394, "acc_norm_stderr": 0.0315841532404771 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.028869778460267042, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.028869778460267042 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.02150024957603348, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.02150024957603348 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6897435897435897, "acc_stderr": 0.02345467488940429, "acc_norm": 0.6897435897435897, "acc_norm_stderr": 0.02345467488940429 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3333333333333333, "acc_stderr": 0.02874204090394848, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.02874204090394848 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6890756302521008, "acc_stderr": 0.030066761582977934, "acc_norm": 0.6890756302521008, "acc_norm_stderr": 0.030066761582977934 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 
0.3443708609271523, "acc_stderr": 0.038796870240733264, "acc_norm": 0.3443708609271523, "acc_norm_stderr": 0.038796870240733264 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8513761467889909, "acc_stderr": 0.015251253773660836, "acc_norm": 0.8513761467889909, "acc_norm_stderr": 0.015251253773660836 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5509259259259259, "acc_stderr": 0.033922384053216174, "acc_norm": 0.5509259259259259, "acc_norm_stderr": 0.033922384053216174 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8431372549019608, "acc_stderr": 0.025524722324553353, "acc_norm": 0.8431372549019608, "acc_norm_stderr": 0.025524722324553353 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8227848101265823, "acc_stderr": 0.024856364184503234, "acc_norm": 0.8227848101265823, "acc_norm_stderr": 0.024856364184503234 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7219730941704036, "acc_stderr": 0.030069584874494036, "acc_norm": 0.7219730941704036, "acc_norm_stderr": 0.030069584874494036 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596914, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596914 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7851239669421488, "acc_stderr": 0.03749492448709695, "acc_norm": 0.7851239669421488, "acc_norm_stderr": 0.03749492448709695 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7777777777777778, "acc_stderr": 0.0401910747255735, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.0401910747255735 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7914110429447853, "acc_stderr": 0.031921934489347235, "acc_norm": 0.7914110429447853, "acc_norm_stderr": 0.031921934489347235 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 }, "harness|hendrycksTest-management|5": { "acc": 0.8252427184466019, "acc_stderr": 0.03760178006026621, "acc_norm": 0.8252427184466019, "acc_norm_stderr": 0.03760178006026621 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8931623931623932, "acc_stderr": 0.020237149008990925, "acc_norm": 0.8931623931623932, "acc_norm_stderr": 0.020237149008990925 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8250319284802043, "acc_stderr": 0.013586619219903338, "acc_norm": 0.8250319284802043, "acc_norm_stderr": 0.013586619219903338 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7369942196531792, "acc_stderr": 0.023703099525258176, "acc_norm": 0.7369942196531792, "acc_norm_stderr": 0.023703099525258176 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.36983240223463687, "acc_stderr": 0.016145881256056215, "acc_norm": 0.36983240223463687, "acc_norm_stderr": 0.016145881256056215 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7287581699346405, "acc_stderr": 0.02545775669666788, "acc_norm": 0.7287581699346405, "acc_norm_stderr": 0.02545775669666788 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7138263665594855, "acc_stderr": 0.02567025924218893, "acc_norm": 0.7138263665594855, "acc_norm_stderr": 0.02567025924218893 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7314814814814815, "acc_stderr": 0.02465968518596728, "acc_norm": 0.7314814814814815, "acc_norm_stderr": 
0.02465968518596728 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48936170212765956, "acc_stderr": 0.029820747191422473, "acc_norm": 0.48936170212765956, "acc_norm_stderr": 0.029820747191422473 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4830508474576271, "acc_stderr": 0.012762896889210855, "acc_norm": 0.4830508474576271, "acc_norm_stderr": 0.012762896889210855 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7058823529411765, "acc_stderr": 0.027678468642144714, "acc_norm": 0.7058823529411765, "acc_norm_stderr": 0.027678468642144714 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6748366013071896, "acc_stderr": 0.01895088677080631, "acc_norm": 0.6748366013071896, "acc_norm_stderr": 0.01895088677080631 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7061224489795919, "acc_stderr": 0.02916273841024977, "acc_norm": 0.7061224489795919, "acc_norm_stderr": 0.02916273841024977 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8258706467661692, "acc_stderr": 0.026814951200421603, "acc_norm": 0.8258706467661692, "acc_norm_stderr": 0.026814951200421603 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.033799766898963086, "acc_norm": 0.87, "acc_norm_stderr": 0.033799766898963086 }, "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8304093567251462, "acc_stderr": 0.02878210810540171, "acc_norm": 0.8304093567251462, "acc_norm_stderr": 0.02878210810540171 }, "harness|truthfulqa:mc|0": { "mc1": 0.3378212974296206, "mc1_stderr": 0.01655716732251688, "mc2": 0.4992288323014176, "mc2_stderr": 0.015334932030447291 }, "harness|winogrande|5": { "acc": 0.8082083662194159, "acc_stderr": 0.011065209664659527 }, "harness|gsm8k|5": { "acc": 0.6702047005307051, "acc_stderr": 0.012949955030571149 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
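A small follow-on sketch (not part of the auto-generated card): assuming `results` is bound to the Python dict printed under "Latest results" above, the 5-shot MMLU average can be recomputed from the per-subject `harness|hendrycksTest-*` entries:

```python
def mmlu_average(results: dict) -> float:
    """Average the 5-shot accuracies over the hendrycksTest subjects
    in a results dict shaped like the one printed above."""
    accs = [
        v["acc"]
        for k, v in results.items()
        if k.startswith("harness|hendrycksTest-")
    ]
    return sum(accs) / len(accs)
```

Applied to the dict above, this collapses the 57 MMLU subject accuracies into the single score usually reported for the benchmark.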
open-llm-leaderboard/details_SanjiWatsuki__openchat-3.5-1210-starling-slerp
[ "region:us" ]
2023-12-27T13:01:39+00:00
{"pretty_name": "Evaluation run of SanjiWatsuki/openchat-3.5-1210-starling-slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [SanjiWatsuki/openchat-3.5-1210-starling-slerp](https://huggingface.co/SanjiWatsuki/openchat-3.5-1210-starling-slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SanjiWatsuki__openchat-3.5-1210-starling-slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T12:59:24.501037](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__openchat-3.5-1210-starling-slerp/blob/main/results_2023-12-27T12-59-24.501037.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6530257167647132,\n \"acc_stderr\": 0.031906035120016406,\n \"acc_norm\": 0.6537129491623167,\n \"acc_norm_stderr\": 0.03255707134476186,\n \"mc1\": 0.3378212974296206,\n \"mc1_stderr\": 0.01655716732251688,\n \"mc2\": 0.4992288323014176,\n \"mc2_stderr\": 0.015334932030447291\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6075085324232082,\n \"acc_stderr\": 0.014269634635670728,\n \"acc_norm\": 0.6390784982935154,\n \"acc_norm_stderr\": 0.014034761386175452\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6713802031467835,\n \"acc_stderr\": 0.004687514708345319,\n \"acc_norm\": 0.8527185819557856,\n \"acc_norm_stderr\": 0.003536619673019997\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6710526315789473,\n \"acc_stderr\": 0.03823428969926605,\n \"acc_norm\": 0.6710526315789473,\n \"acc_norm_stderr\": 0.03823428969926605\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7847222222222222,\n \"acc_stderr\": 0.03437079344106135,\n \"acc_norm\": 0.7847222222222222,\n \"acc_norm_stderr\": 0.03437079344106135\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.049999999999999996,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.049999999999999996\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5872340425531914,\n \"acc_stderr\": 0.03218471141400352,\n \"acc_norm\": 0.5872340425531914,\n \"acc_norm_stderr\": 0.03218471141400352\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5793103448275863,\n \"acc_stderr\": 0.0411391498118926,\n \"acc_norm\": 0.5793103448275863,\n \"acc_norm_stderr\": 0.0411391498118926\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.025197101074246483,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.025197101074246483\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5158730158730159,\n \"acc_stderr\": 0.044698818540726076,\n \"acc_norm\": 0.5158730158730159,\n \"acc_norm_stderr\": 0.044698818540726076\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7935483870967742,\n \"acc_stderr\": 0.023025899617188716,\n \"acc_norm\": 0.7935483870967742,\n \"acc_norm_stderr\": 0.023025899617188716\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.793939393939394,\n \"acc_stderr\": 0.0315841532404771,\n \"acc_norm\": 0.793939393939394,\n \"acc_norm_stderr\": 0.0315841532404771\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.028869778460267042,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.028869778460267042\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603348,\n \"acc_norm\": 0.9015544041450777,\n 
\"acc_norm_stderr\": 0.02150024957603348\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6897435897435897,\n \"acc_stderr\": 0.02345467488940429,\n \"acc_norm\": 0.6897435897435897,\n \"acc_norm_stderr\": 0.02345467488940429\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.02874204090394848,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.02874204090394848\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977934,\n \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977934\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3443708609271523,\n \"acc_stderr\": 0.038796870240733264,\n \"acc_norm\": 0.3443708609271523,\n \"acc_norm_stderr\": 0.038796870240733264\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8513761467889909,\n \"acc_stderr\": 0.015251253773660836,\n \"acc_norm\": 0.8513761467889909,\n \"acc_norm_stderr\": 0.015251253773660836\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5509259259259259,\n \"acc_stderr\": 0.033922384053216174,\n \"acc_norm\": 0.5509259259259259,\n \"acc_norm_stderr\": 0.033922384053216174\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8431372549019608,\n \"acc_stderr\": 0.025524722324553353,\n \"acc_norm\": 0.8431372549019608,\n \"acc_norm_stderr\": 0.025524722324553353\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8227848101265823,\n \"acc_stderr\": 0.024856364184503234,\n \"acc_norm\": 0.8227848101265823,\n \"acc_norm_stderr\": 0.024856364184503234\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7219730941704036,\n \"acc_stderr\": 0.030069584874494036,\n \"acc_norm\": 0.7219730941704036,\n \"acc_norm_stderr\": 0.030069584874494036\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596914,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596914\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.03749492448709695,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.03749492448709695\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.0401910747255735,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.0401910747255735\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7914110429447853,\n \"acc_stderr\": 0.031921934489347235,\n \"acc_norm\": 0.7914110429447853,\n \"acc_norm_stderr\": 0.031921934489347235\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8931623931623932,\n \"acc_stderr\": 0.020237149008990925,\n \"acc_norm\": 0.8931623931623932,\n \"acc_norm_stderr\": 0.020237149008990925\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8250319284802043,\n \"acc_stderr\": 0.013586619219903338,\n \"acc_norm\": 0.8250319284802043,\n \"acc_norm_stderr\": 0.013586619219903338\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7369942196531792,\n \"acc_stderr\": 0.023703099525258176,\n \"acc_norm\": 0.7369942196531792,\n \"acc_norm_stderr\": 0.023703099525258176\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.36983240223463687,\n \"acc_stderr\": 0.016145881256056215,\n \"acc_norm\": 0.36983240223463687,\n \"acc_norm_stderr\": 0.016145881256056215\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7287581699346405,\n \"acc_stderr\": 0.02545775669666788,\n \"acc_norm\": 0.7287581699346405,\n \"acc_norm_stderr\": 0.02545775669666788\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7138263665594855,\n \"acc_stderr\": 0.02567025924218893,\n \"acc_norm\": 0.7138263665594855,\n \"acc_norm_stderr\": 0.02567025924218893\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7314814814814815,\n \"acc_stderr\": 0.02465968518596728,\n \"acc_norm\": 0.7314814814814815,\n \"acc_norm_stderr\": 0.02465968518596728\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.029820747191422473,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.029820747191422473\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4830508474576271,\n \"acc_stderr\": 0.012762896889210855,\n \"acc_norm\": 0.4830508474576271,\n \"acc_norm_stderr\": 0.012762896889210855\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7058823529411765,\n \"acc_stderr\": 0.027678468642144714,\n \"acc_norm\": 0.7058823529411765,\n \"acc_norm_stderr\": 0.027678468642144714\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6748366013071896,\n \"acc_stderr\": 0.01895088677080631,\n \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.01895088677080631\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.02916273841024977,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.02916273841024977\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8258706467661692,\n \"acc_stderr\": 0.026814951200421603,\n \"acc_norm\": 0.8258706467661692,\n \"acc_norm_stderr\": 0.026814951200421603\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3378212974296206,\n \"mc1_stderr\": 0.01655716732251688,\n \"mc2\": 0.4992288323014176,\n \"mc2_stderr\": 0.015334932030447291\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8082083662194159,\n \"acc_stderr\": 0.011065209664659527\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.6702047005307051,\n \"acc_stderr\": 0.012949955030571149\n }\n}\n```", "repo_url": "https://huggingface.co/SanjiWatsuki/openchat-3.5-1210-starling-slerp", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-59-24.501037.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-59-24.501037.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-59-24.501037.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T12-59-24.501037.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-59-24.501037.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T12_59_24.501037", "path": ["**/details_harness|winogrande|5_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T12-59-24.501037.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_27T12_59_24.501037", "path": ["results_2023-12-27T12-59-24.501037.parquet"]}, {"split": "latest", "path": ["results_2023-12-27T12-59-24.501037.parquet"]}]}]}
2023-12-27T13:02:01+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SanjiWatsuki/openchat-3.5-1210-starling-slerp Dataset automatically created during the evaluation run of model SanjiWatsuki/openchat-3.5-1210-starling-slerp on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T12:59:24.501037 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
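In this flattened card text the code fence that normally follows "you can for instance do the following:" has been stripped. Judging by the pattern of these auto-generated cards (compare the intact Zangs3011 card further below), the missing snippet is presumably of this form; the repo id is inferred from the model name and the usual `details_{org}__{model}` naming convention, so treat it as an assumption:

```python
from datasets import load_dataset

# Presumed form of the stripped snippet; the repo id is inferred from the
# model name in the card above, not quoted from the original.
data = load_dataset(
    "open-llm-leaderboard/details_SanjiWatsuki__openchat-3.5-1210-starling-slerp",
    "harness_winogrande_5",
    split="train",
)
```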
[ "# Dataset Card for Evaluation run of SanjiWatsuki/openchat-3.5-1210-starling-slerp\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/openchat-3.5-1210-starling-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:59:24.501037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SanjiWatsuki/openchat-3.5-1210-starling-slerp\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/openchat-3.5-1210-starling-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T12:59:24.501037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 197, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of SanjiWatsuki/openchat-3.5-1210-starling-slerp\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/openchat-3.5-1210-starling-slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T12:59:24.501037(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
135ec2a89eb2e5a50875d43cbe851cdd0ea10bad
# Dataset Card for Evaluation run of Zangs3011/gpt2_137m_DolphinCoder <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Zangs3011/gpt2_137m_DolphinCoder](https://huggingface.co/Zangs3011/gpt2_137m_DolphinCoder) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Zangs3011__gpt2_137m_DolphinCoder", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T13:03:11.730792](https://huggingface.co/datasets/open-llm-leaderboard/details_Zangs3011__gpt2_137m_DolphinCoder/blob/main/results_2023-12-27T13-03-11.730792.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.2541058154915133, "acc_stderr": 0.030552087768393632, "acc_norm": 0.2544491269494238, "acc_norm_stderr": 0.03131084218129606, "mc1": 0.2252141982864137, "mc1_stderr": 0.014623240768023496, "mc2": 0.41575126598869544, "mc2_stderr": 0.015079894627974334 }, "harness|arc:challenge|25": { "acc": 0.19795221843003413, "acc_stderr": 0.011643990971573395, "acc_norm": 0.21843003412969283, "acc_norm_stderr": 0.012074291605700983 }, "harness|hellaswag|10": { "acc": 0.29117705636327423, "acc_stderr": 0.00453376468621199, "acc_norm": 0.3134833698466441, "acc_norm_stderr": 0.004629608863272312 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.22, "acc_stderr": 0.04163331998932268, "acc_norm": 0.22, "acc_norm_stderr": 0.04163331998932268 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.2814814814814815, "acc_stderr": 0.038850042458002554, "acc_norm": 0.2814814814814815, "acc_norm_stderr": 0.038850042458002554 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.17763157894736842, "acc_stderr": 0.031103182383123398, "acc_norm": 0.17763157894736842, "acc_norm_stderr": 0.031103182383123398 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.19, "acc_stderr": 0.03942772444036623, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036623 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.22641509433962265, "acc_stderr": 0.025757559893106737, "acc_norm": 0.22641509433962265, "acc_norm_stderr": 0.025757559893106737 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2152777777777778, "acc_stderr": 0.034370793441061344, "acc_norm": 0.2152777777777778, "acc_norm_stderr": 0.034370793441061344 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.22, "acc_stderr": 0.04163331998932269, "acc_norm": 0.22, "acc_norm_stderr": 0.04163331998932269 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.24, "acc_stderr": 0.04292346959909281, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909281 },
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.22, "acc_stderr": 0.04163331998932269, "acc_norm": 0.22, "acc_norm_stderr": 0.04163331998932269 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.24855491329479767, "acc_stderr": 0.03295304696818318, "acc_norm": 0.24855491329479767, "acc_norm_stderr": 0.03295304696818318 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.2549019607843137, "acc_stderr": 0.043364327079931785, "acc_norm": 0.2549019607843137, "acc_norm_stderr": 0.043364327079931785 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.19, "acc_stderr": 0.03942772444036622, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036622 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.26382978723404255, "acc_stderr": 0.028809989854102973, "acc_norm": 0.26382978723404255, "acc_norm_stderr": 0.028809989854102973 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.23684210526315788, "acc_stderr": 0.03999423879281336, "acc_norm": 0.23684210526315788, "acc_norm_stderr": 0.03999423879281336 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.3103448275862069, "acc_stderr": 0.038552896163789485, "acc_norm": 0.3103448275862069, "acc_norm_stderr": 0.038552896163789485 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24074074074074073, "acc_stderr": 0.022019080012217893, "acc_norm": 0.24074074074074073, "acc_norm_stderr": 0.022019080012217893 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.1349206349206349, "acc_stderr": 0.030557101589417508, "acc_norm": 0.1349206349206349, "acc_norm_stderr": 0.030557101589417508 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.18, "acc_stderr": 0.038612291966536934, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536934 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.2129032258064516, "acc_stderr": 0.023287665127268552, "acc_norm": 0.2129032258064516, "acc_norm_stderr": 0.023287665127268552 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.21674876847290642, "acc_stderr": 0.028990331252516235, "acc_norm": 0.21674876847290642, "acc_norm_stderr": 0.028990331252516235 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.21212121212121213, "acc_stderr": 0.03192271569548299, "acc_norm": 0.21212121212121213, "acc_norm_stderr": 0.03192271569548299 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.35353535353535354, "acc_stderr": 0.03406086723547153, "acc_norm": 0.35353535353535354, "acc_norm_stderr": 0.03406086723547153 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.36787564766839376, "acc_stderr": 0.03480175668466036, "acc_norm": 0.36787564766839376, "acc_norm_stderr": 0.03480175668466036 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2230769230769231, "acc_stderr": 0.02110773012724399, "acc_norm": 0.2230769230769231, "acc_norm_stderr": 0.02110773012724399 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2518518518518518, "acc_stderr": 0.026466117538959916, "acc_norm": 0.2518518518518518, "acc_norm_stderr": 0.026466117538959916 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.24369747899159663, "acc_stderr": 0.02788682807838056, "acc_norm": 0.24369747899159663, "acc_norm_stderr": 0.02788682807838056 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2119205298013245, 
"acc_stderr": 0.033367670865679766, "acc_norm": 0.2119205298013245, "acc_norm_stderr": 0.033367670865679766 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3467889908256881, "acc_stderr": 0.020406097104093027, "acc_norm": 0.3467889908256881, "acc_norm_stderr": 0.020406097104093027 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4675925925925926, "acc_stderr": 0.03402801581358966, "acc_norm": 0.4675925925925926, "acc_norm_stderr": 0.03402801581358966 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.24019607843137256, "acc_stderr": 0.02998373305591361, "acc_norm": 0.24019607843137256, "acc_norm_stderr": 0.02998373305591361 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.2869198312236287, "acc_stderr": 0.029443773022594693, "acc_norm": 0.2869198312236287, "acc_norm_stderr": 0.029443773022594693 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.25112107623318386, "acc_stderr": 0.029105220833224633, "acc_norm": 0.25112107623318386, "acc_norm_stderr": 0.029105220833224633 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.2595419847328244, "acc_stderr": 0.03844876139785271, "acc_norm": 0.2595419847328244, "acc_norm_stderr": 0.03844876139785271 }, "harness|hendrycksTest-international_law|5": { "acc": 0.23140495867768596, "acc_stderr": 0.03849856098794089, "acc_norm": 0.23140495867768596, "acc_norm_stderr": 0.03849856098794089 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.25925925925925924, "acc_stderr": 0.042365112580946336, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.042365112580946336 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.3128834355828221, "acc_stderr": 0.036429145782924055, "acc_norm": 0.3128834355828221, "acc_norm_stderr": 0.036429145782924055 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.17857142857142858, "acc_stderr": 0.03635209121577806, "acc_norm": 0.17857142857142858, "acc_norm_stderr": 0.03635209121577806 }, "harness|hendrycksTest-management|5": { "acc": 0.3592233009708738, "acc_stderr": 0.04750458399041692, "acc_norm": 0.3592233009708738, "acc_norm_stderr": 0.04750458399041692 }, "harness|hendrycksTest-marketing|5": { "acc": 0.26495726495726496, "acc_stderr": 0.02891120880274949, "acc_norm": 0.26495726495726496, "acc_norm_stderr": 0.02891120880274949 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.23371647509578544, "acc_stderr": 0.015133383278988844, "acc_norm": 0.23371647509578544, "acc_norm_stderr": 0.015133383278988844 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.22832369942196531, "acc_stderr": 0.022598703804321628, "acc_norm": 0.22832369942196531, "acc_norm_stderr": 0.022598703804321628 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24916201117318434, "acc_stderr": 0.014465893829859924, "acc_norm": 0.24916201117318434, "acc_norm_stderr": 0.014465893829859924 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.24836601307189543, "acc_stderr": 0.02473998135511359, "acc_norm": 0.24836601307189543, "acc_norm_stderr": 0.02473998135511359 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.19935691318327975, "acc_stderr": 0.022691033780549656, "acc_norm": 0.19935691318327975, "acc_norm_stderr": 0.022691033780549656 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.2222222222222222, "acc_stderr": 0.023132376234543343, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 
0.023132376234543343 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.2553191489361702, "acc_stderr": 0.026011992930902, "acc_norm": 0.2553191489361702, "acc_norm_stderr": 0.026011992930902 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.25358539765319427, "acc_stderr": 0.011111715336101143, "acc_norm": 0.25358539765319427, "acc_norm_stderr": 0.011111715336101143 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4485294117647059, "acc_stderr": 0.030211479609121593, "acc_norm": 0.4485294117647059, "acc_norm_stderr": 0.030211479609121593 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.2630718954248366, "acc_stderr": 0.01781267654232065, "acc_norm": 0.2630718954248366, "acc_norm_stderr": 0.01781267654232065 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.2, "acc_stderr": 0.03831305140884603, "acc_norm": 0.2, "acc_norm_stderr": 0.03831305140884603 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.3673469387755102, "acc_stderr": 0.030862144921087558, "acc_norm": 0.3673469387755102, "acc_norm_stderr": 0.030862144921087558 }, "harness|hendrycksTest-sociology|5": { "acc": 0.208955223880597, "acc_stderr": 0.028748298931728655, "acc_norm": 0.208955223880597, "acc_norm_stderr": 0.028748298931728655 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.26, "acc_stderr": 0.04408440022768079, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-virology|5": { "acc": 0.20481927710843373, "acc_stderr": 0.03141784291663926, "acc_norm": 0.20481927710843373, "acc_norm_stderr": 0.03141784291663926 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.2046783625730994, "acc_stderr": 0.030944459778533207, "acc_norm": 0.2046783625730994, "acc_norm_stderr": 0.030944459778533207 }, "harness|truthfulqa:mc|0": { "mc1": 0.2252141982864137, "mc1_stderr": 0.014623240768023496, "mc2": 0.41575126598869544, "mc2_stderr": 0.015079894627974334 }, "harness|winogrande|5": { "acc": 0.5201262825572218, "acc_stderr": 0.01404109666434433 }, "harness|gsm8k|5": { "acc": 0.01061410159211524, "acc_stderr": 0.002822713322387704 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
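As the card notes, the aggregated metrics live in the extra "results" configuration, with the "latest" split aliasing the newest run. A minimal sketch for pulling them — assuming this record's metadata declares the same `results` config and `latest` split as the first record's listing (the Zangs3011 config listing is truncated below, so this is an inference from the shared layout):

```python
from datasets import load_dataset

# "results" holds the aggregated run metrics; "latest" aliases the newest run.
results = load_dataset(
    "open-llm-leaderboard/details_Zangs3011__gpt2_137m_DolphinCoder",
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated metrics per run
```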
open-llm-leaderboard/details_Zangs3011__gpt2_137m_DolphinCoder
[ "region:us" ]
2023-12-27T13:04:35+00:00
{"pretty_name": "Evaluation run of Zangs3011/gpt2_137m_DolphinCoder", "dataset_summary": "Dataset automatically created during the evaluation run of model [Zangs3011/gpt2_137m_DolphinCoder](https://huggingface.co/Zangs3011/gpt2_137m_DolphinCoder) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Zangs3011__gpt2_137m_DolphinCoder\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T13:03:11.730792](https://huggingface.co/datasets/open-llm-leaderboard/details_Zangs3011__gpt2_137m_DolphinCoder/blob/main/results_2023-12-27T13-03-11.730792.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2541058154915133,\n \"acc_stderr\": 0.030552087768393632,\n \"acc_norm\": 0.2544491269494238,\n \"acc_norm_stderr\": 0.03131084218129606,\n \"mc1\": 0.2252141982864137,\n \"mc1_stderr\": 0.014623240768023496,\n \"mc2\": 0.41575126598869544,\n \"mc2_stderr\": 0.015079894627974334\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.19795221843003413,\n \"acc_stderr\": 0.011643990971573395,\n \"acc_norm\": 0.21843003412969283,\n \"acc_norm_stderr\": 0.012074291605700983\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.29117705636327423,\n \"acc_stderr\": 0.00453376468621199,\n \"acc_norm\": 0.3134833698466441,\n \"acc_norm_stderr\": 0.004629608863272312\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2814814814814815,\n \"acc_stderr\": 0.038850042458002554,\n \"acc_norm\": 0.2814814814814815,\n \"acc_norm_stderr\": 0.038850042458002554\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036623,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036623\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.22641509433962265,\n \"acc_stderr\": 0.025757559893106737,\n \"acc_norm\": 0.22641509433962265,\n \"acc_norm_stderr\": 0.025757559893106737\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2152777777777778,\n \"acc_stderr\": 0.034370793441061344,\n \"acc_norm\": 0.2152777777777778,\n \"acc_norm_stderr\": 0.034370793441061344\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 
0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909281,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909281\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.03295304696818318,\n \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.03295304696818318\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.2549019607843137,\n \"acc_stderr\": 0.043364327079931785,\n \"acc_norm\": 0.2549019607843137,\n \"acc_norm_stderr\": 0.043364327079931785\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036622,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036622\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.028809989854102973,\n \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.028809989854102973\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.03999423879281336,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.03999423879281336\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.3103448275862069,\n \"acc_stderr\": 0.038552896163789485,\n \"acc_norm\": 0.3103448275862069,\n \"acc_norm_stderr\": 0.038552896163789485\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.022019080012217893,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.022019080012217893\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.1349206349206349,\n \"acc_stderr\": 0.030557101589417508,\n \"acc_norm\": 0.1349206349206349,\n \"acc_norm_stderr\": 0.030557101589417508\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2129032258064516,\n \"acc_stderr\": 0.023287665127268552,\n \"acc_norm\": 0.2129032258064516,\n \"acc_norm_stderr\": 0.023287665127268552\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.21674876847290642,\n \"acc_stderr\": 0.028990331252516235,\n \"acc_norm\": 0.21674876847290642,\n \"acc_norm_stderr\": 0.028990331252516235\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.21212121212121213,\n \"acc_stderr\": 0.03192271569548299,\n \"acc_norm\": 0.21212121212121213,\n \"acc_norm_stderr\": 0.03192271569548299\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.35353535353535354,\n \"acc_stderr\": 0.03406086723547153,\n \"acc_norm\": 0.35353535353535354,\n \"acc_norm_stderr\": 0.03406086723547153\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.36787564766839376,\n \"acc_stderr\": 0.03480175668466036,\n \"acc_norm\": 0.36787564766839376,\n \"acc_norm_stderr\": 0.03480175668466036\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2230769230769231,\n \"acc_stderr\": 0.02110773012724399,\n \"acc_norm\": 0.2230769230769231,\n \"acc_norm_stderr\": 0.02110773012724399\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2518518518518518,\n \"acc_stderr\": 0.026466117538959916,\n \"acc_norm\": 0.2518518518518518,\n \"acc_norm_stderr\": 0.026466117538959916\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.24369747899159663,\n \"acc_stderr\": 0.02788682807838056,\n \"acc_norm\": 0.24369747899159663,\n \"acc_norm_stderr\": 0.02788682807838056\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2119205298013245,\n \"acc_stderr\": 0.033367670865679766,\n \"acc_norm\": 0.2119205298013245,\n \"acc_norm_stderr\": 0.033367670865679766\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3467889908256881,\n \"acc_stderr\": 0.020406097104093027,\n \"acc_norm\": 0.3467889908256881,\n \"acc_norm_stderr\": 0.020406097104093027\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.24019607843137256,\n \"acc_stderr\": 0.02998373305591361,\n \"acc_norm\": 0.24019607843137256,\n \"acc_norm_stderr\": 0.02998373305591361\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2869198312236287,\n \"acc_stderr\": 0.029443773022594693,\n \"acc_norm\": 0.2869198312236287,\n \"acc_norm_stderr\": 0.029443773022594693\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.25112107623318386,\n \"acc_stderr\": 0.029105220833224633,\n \"acc_norm\": 0.25112107623318386,\n \"acc_norm_stderr\": 0.029105220833224633\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.23140495867768596,\n \"acc_stderr\": 0.03849856098794089,\n \"acc_norm\": 0.23140495867768596,\n \"acc_norm_stderr\": 0.03849856098794089\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.042365112580946336,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.042365112580946336\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3128834355828221,\n \"acc_stderr\": 0.036429145782924055,\n \"acc_norm\": 0.3128834355828221,\n \"acc_norm_stderr\": 0.036429145782924055\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.17857142857142858,\n \"acc_stderr\": 0.03635209121577806,\n \"acc_norm\": 0.17857142857142858,\n \"acc_norm_stderr\": 0.03635209121577806\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.3592233009708738,\n \"acc_stderr\": 0.04750458399041692,\n \"acc_norm\": 0.3592233009708738,\n \"acc_norm_stderr\": 0.04750458399041692\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.26495726495726496,\n \"acc_stderr\": 0.02891120880274949,\n \"acc_norm\": 0.26495726495726496,\n \"acc_norm_stderr\": 0.02891120880274949\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n 
\"acc\": 0.23371647509578544,\n \"acc_stderr\": 0.015133383278988844,\n \"acc_norm\": 0.23371647509578544,\n \"acc_norm_stderr\": 0.015133383278988844\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.22832369942196531,\n \"acc_stderr\": 0.022598703804321628,\n \"acc_norm\": 0.22832369942196531,\n \"acc_norm_stderr\": 0.022598703804321628\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24916201117318434,\n \"acc_stderr\": 0.014465893829859924,\n \"acc_norm\": 0.24916201117318434,\n \"acc_norm_stderr\": 0.014465893829859924\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.24836601307189543,\n \"acc_stderr\": 0.02473998135511359,\n \"acc_norm\": 0.24836601307189543,\n \"acc_norm_stderr\": 0.02473998135511359\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.19935691318327975,\n \"acc_stderr\": 0.022691033780549656,\n \"acc_norm\": 0.19935691318327975,\n \"acc_norm_stderr\": 0.022691033780549656\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.023132376234543343,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.023132376234543343\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2553191489361702,\n \"acc_stderr\": 0.026011992930902,\n \"acc_norm\": 0.2553191489361702,\n \"acc_norm_stderr\": 0.026011992930902\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.25358539765319427,\n \"acc_stderr\": 0.011111715336101143,\n \"acc_norm\": 0.25358539765319427,\n \"acc_norm_stderr\": 0.011111715336101143\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4485294117647059,\n \"acc_stderr\": 0.030211479609121593,\n \"acc_norm\": 0.4485294117647059,\n \"acc_norm_stderr\": 0.030211479609121593\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2630718954248366,\n \"acc_stderr\": 0.01781267654232065,\n \"acc_norm\": 0.2630718954248366,\n \"acc_norm_stderr\": 0.01781267654232065\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.03831305140884603,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.03831305140884603\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.3673469387755102,\n \"acc_stderr\": 0.030862144921087558,\n \"acc_norm\": 0.3673469387755102,\n \"acc_norm_stderr\": 0.030862144921087558\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.208955223880597,\n \"acc_stderr\": 0.028748298931728655,\n \"acc_norm\": 0.208955223880597,\n \"acc_norm_stderr\": 0.028748298931728655\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.20481927710843373,\n \"acc_stderr\": 0.03141784291663926,\n \"acc_norm\": 0.20481927710843373,\n \"acc_norm_stderr\": 0.03141784291663926\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2046783625730994,\n \"acc_stderr\": 0.030944459778533207,\n \"acc_norm\": 0.2046783625730994,\n \"acc_norm_stderr\": 0.030944459778533207\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2252141982864137,\n \"mc1_stderr\": 0.014623240768023496,\n \"mc2\": 0.41575126598869544,\n \"mc2_stderr\": 0.015079894627974334\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5201262825572218,\n \"acc_stderr\": 0.01404109666434433\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.01061410159211524,\n \"acc_stderr\": 0.002822713322387704\n 
}\n}\n```", "repo_url": "https://huggingface.co/Zangs3011/gpt2_137m_DolphinCoder", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-03-11.730792.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-03-11.730792.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-03-11.730792.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-03-11.730792.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-03-11.730792.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T13_03_11.730792", "path": ["**/details_harness|winogrande|5_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T13-03-11.730792.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_27T13_03_11.730792", "path": ["results_2023-12-27T13-03-11.730792.parquet"]}, {"split": "latest", "path": ["results_2023-12-27T13-03-11.730792.parquet"]}]}]}
2023-12-27T13:04:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Zangs3011/gpt2_137m_DolphinCoder Dataset automatically created during the evaluation run of model Zangs3011/gpt2_137m_DolphinCoder on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T13:03:11.730792 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
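In this flattened rendering the loading snippet was dropped after "do the following:". A minimal sketch, assuming the details repository follows the leaderboard's `details_<org>__<model>` naming convention (the repo id below is inferred from the model name, not copied from this record):

```python
from datasets import load_dataset

# Repo id inferred from the leaderboard convention details_<org>__<model>;
# adjust if the actual details repository differs.
data = load_dataset(
    "open-llm-leaderboard/details_Zangs3011__gpt2_137m_DolphinCoder",
    "harness_winogrande_5",  # any config listed in this record's metadata works
    split="train",
)
```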
[ "# Dataset Card for Evaluation run of Zangs3011/gpt2_137m_DolphinCoder\n\n\n\nDataset automatically created during the evaluation run of model Zangs3011/gpt2_137m_DolphinCoder on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:03:11.730792(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Zangs3011/gpt2_137m_DolphinCoder\n\n\n\nDataset automatically created during the evaluation run of model Zangs3011/gpt2_137m_DolphinCoder on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:03:11.730792(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 195, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Zangs3011/gpt2_137m_DolphinCoder\n\n\n\nDataset automatically created during the evaluation run of model Zangs3011/gpt2_137m_DolphinCoder on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T13:03:11.730792(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
2d805f6b185c94f448b037f48e87056aaed80b7a
# Dataset Card for Evaluation run of fblgit/LUNA-SOLARkrautLM-Instruct <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [fblgit/LUNA-SOLARkrautLM-Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_fblgit__LUNA-SOLARkrautLM-Instruct", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T13:04:58.261893](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__LUNA-SOLARkrautLM-Instruct/blob/main/results_2023-12-27T13-04-58.261893.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6642541203854099, "acc_stderr": 0.0317093464542955, "acc_norm": 0.6656901555387255, "acc_norm_stderr": 0.03234983203431538, "mc1": 0.5826193390452876, "mc1_stderr": 0.017262891063272164, "mc2": 0.7336752254501507, "mc2_stderr": 0.014886399154960954 }, "harness|arc:challenge|25": { "acc": 0.6868600682593856, "acc_stderr": 0.013552671543623497, "acc_norm": 0.71160409556314, "acc_norm_stderr": 0.013238394422428173 }, "harness|hellaswag|10": { "acc": 0.7130053774148576, "acc_stderr": 0.004514345547780332, "acc_norm": 0.8827922724556861, "acc_norm_stderr": 0.003210102507177248 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6222222222222222, "acc_stderr": 0.04188307537595853, "acc_norm": 0.6222222222222222, "acc_norm_stderr": 0.04188307537595853 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.743421052631579, "acc_stderr": 0.0355418036802569, "acc_norm": 0.743421052631579, "acc_norm_stderr": 0.0355418036802569 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.78, "acc_stderr": 0.04163331998932261, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932261 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7708333333333334, "acc_stderr": 0.03514697467862388, "acc_norm": 0.7708333333333334, "acc_norm_stderr": 0.03514697467862388 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.48, "acc_stderr": 0.050211673156867795, "acc_norm": 0.48, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.05021167315686779, "acc_norm": 0.52, "acc_norm_stderr": 0.05021167315686779 },
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.27, "acc_stderr": 0.044619604333847394, "acc_norm": 0.27, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6589595375722543, "acc_stderr": 0.03614665424180826, "acc_norm": 0.6589595375722543, "acc_norm_stderr": 0.03614665424180826 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.048108401480826346, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.048108401480826346 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.04408440022768078, "acc_norm": 0.74, "acc_norm_stderr": 0.04408440022768078 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.625531914893617, "acc_stderr": 0.03163910665367291, "acc_norm": 0.625531914893617, "acc_norm_stderr": 0.03163910665367291 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5087719298245614, "acc_stderr": 0.04702880432049615, "acc_norm": 0.5087719298245614, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6413793103448275, "acc_stderr": 0.039966295748767186, "acc_norm": 0.6413793103448275, "acc_norm_stderr": 0.039966295748767186 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4894179894179894, "acc_stderr": 0.025745542276045478, "acc_norm": 0.4894179894179894, "acc_norm_stderr": 0.025745542276045478 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4523809523809524, "acc_stderr": 0.044518079590553275, "acc_norm": 0.4523809523809524, "acc_norm_stderr": 0.044518079590553275 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.35, "acc_stderr": 0.047937248544110196, "acc_norm": 0.35, "acc_norm_stderr": 0.047937248544110196 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8161290322580645, "acc_stderr": 0.022037217340267826, "acc_norm": 0.8161290322580645, "acc_norm_stderr": 0.022037217340267826 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5270935960591133, "acc_stderr": 0.03512819077876106, "acc_norm": 0.5270935960591133, "acc_norm_stderr": 0.03512819077876106 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.806060606060606, "acc_stderr": 0.03087414513656209, "acc_norm": 0.806060606060606, "acc_norm_stderr": 0.03087414513656209 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8585858585858586, "acc_stderr": 0.024825909793343343, "acc_norm": 0.8585858585858586, "acc_norm_stderr": 0.024825909793343343 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8808290155440415, "acc_stderr": 0.023381935348121437, "acc_norm": 0.8808290155440415, "acc_norm_stderr": 0.023381935348121437 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6692307692307692, "acc_stderr": 0.02385479568097113, "acc_norm": 0.6692307692307692, "acc_norm_stderr": 0.02385479568097113 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37777777777777777, "acc_stderr": 0.029560707392465715, "acc_norm": 0.37777777777777777, "acc_norm_stderr": 0.029560707392465715 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7352941176470589, "acc_stderr": 0.028657491285071973, "acc_norm": 0.7352941176470589, "acc_norm_stderr": 0.028657491285071973 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 
0.03943966699183629, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8366972477064221, "acc_stderr": 0.01584825580650157, "acc_norm": 0.8366972477064221, "acc_norm_stderr": 0.01584825580650157 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5879629629629629, "acc_stderr": 0.03356787758160831, "acc_norm": 0.5879629629629629, "acc_norm_stderr": 0.03356787758160831 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8333333333333334, "acc_stderr": 0.02615686752393104, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.02615686752393104 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8396624472573839, "acc_stderr": 0.02388438092596567, "acc_norm": 0.8396624472573839, "acc_norm_stderr": 0.02388438092596567 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6860986547085202, "acc_stderr": 0.03114679648297246, "acc_norm": 0.6860986547085202, "acc_norm_stderr": 0.03114679648297246 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7404580152671756, "acc_stderr": 0.03844876139785271, "acc_norm": 0.7404580152671756, "acc_norm_stderr": 0.03844876139785271 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7933884297520661, "acc_stderr": 0.03695980128098824, "acc_norm": 0.7933884297520661, "acc_norm_stderr": 0.03695980128098824 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7870370370370371, "acc_stderr": 0.0395783547198098, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.0395783547198098 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.04726835553719099, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.04726835553719099 }, "harness|hendrycksTest-management|5": { "acc": 0.8446601941747572, "acc_stderr": 0.03586594738573974, "acc_norm": 0.8446601941747572, "acc_norm_stderr": 0.03586594738573974 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.023086635086841407, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.023086635086841407 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7956577266922095, "acc_stderr": 0.014419123980931894, "acc_norm": 0.7956577266922095, "acc_norm_stderr": 0.014419123980931894 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7427745664739884, "acc_stderr": 0.02353292543104429, "acc_norm": 0.7427745664739884, "acc_norm_stderr": 0.02353292543104429 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.34972067039106147, "acc_stderr": 0.015949308790233645, "acc_norm": 0.34972067039106147, "acc_norm_stderr": 0.015949308790233645 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7170418006430869, "acc_stderr": 0.02558306248998482, "acc_norm": 0.7170418006430869, "acc_norm_stderr": 0.02558306248998482 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7808641975308642, "acc_stderr": 0.023016705640262192, "acc_norm": 0.7808641975308642, "acc_norm_stderr": 0.023016705640262192 }, 
"harness|hendrycksTest-professional_accounting|5": { "acc": 0.5212765957446809, "acc_stderr": 0.029800481645628693, "acc_norm": 0.5212765957446809, "acc_norm_stderr": 0.029800481645628693 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4810951760104302, "acc_stderr": 0.012761104871472655, "acc_norm": 0.4810951760104302, "acc_norm_stderr": 0.012761104871472655 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7316176470588235, "acc_stderr": 0.026917481224377204, "acc_norm": 0.7316176470588235, "acc_norm_stderr": 0.026917481224377204 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6715686274509803, "acc_stderr": 0.018999707383162666, "acc_norm": 0.6715686274509803, "acc_norm_stderr": 0.018999707383162666 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.7090909090909091, "acc_stderr": 0.04350271442923243, "acc_norm": 0.7090909090909091, "acc_norm_stderr": 0.04350271442923243 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7551020408163265, "acc_stderr": 0.027529637440174923, "acc_norm": 0.7551020408163265, "acc_norm_stderr": 0.027529637440174923 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8507462686567164, "acc_stderr": 0.02519692987482707, "acc_norm": 0.8507462686567164, "acc_norm_stderr": 0.02519692987482707 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.88, "acc_stderr": 0.032659863237109066, "acc_norm": 0.88, "acc_norm_stderr": 0.032659863237109066 }, "harness|hendrycksTest-virology|5": { "acc": 0.5903614457831325, "acc_stderr": 0.038284011150790206, "acc_norm": 0.5903614457831325, "acc_norm_stderr": 0.038284011150790206 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7719298245614035, "acc_stderr": 0.032180937956023566, "acc_norm": 0.7719298245614035, "acc_norm_stderr": 0.032180937956023566 }, "harness|truthfulqa:mc|0": { "mc1": 0.5826193390452876, "mc1_stderr": 0.017262891063272164, "mc2": 0.7336752254501507, "mc2_stderr": 0.014886399154960954 }, "harness|winogrande|5": { "acc": 0.829518547750592, "acc_stderr": 0.010569021122825897 }, "harness|gsm8k|5": { "acc": 0.6087945413191812, "acc_stderr": 0.0134425024027943 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
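The card above demonstrates loading a single task config; for the aggregated scores it describes, a minimal sketch, assuming the "results" configuration and its "latest" split follow the same pattern declared in this repo's configs metadata:

```python
from datasets import load_dataset

# The "results" configuration holds the aggregated run metrics;
# the "latest" split always points at the newest evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_fblgit__LUNA-SOLARkrautLM-Instruct",
    "results",
    split="latest",
)
print(results[0])  # one row per run with the aggregated scores
```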
open-llm-leaderboard/details_fblgit__LUNA-SOLARkrautLM-Instruct
[ "region:us" ]
2023-12-27T13:07:11+00:00
{"pretty_name": "Evaluation run of fblgit/LUNA-SOLARkrautLM-Instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [fblgit/LUNA-SOLARkrautLM-Instruct](https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fblgit__LUNA-SOLARkrautLM-Instruct\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T13:04:58.261893](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__LUNA-SOLARkrautLM-Instruct/blob/main/results_2023-12-27T13-04-58.261893.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6642541203854099,\n \"acc_stderr\": 0.0317093464542955,\n \"acc_norm\": 0.6656901555387255,\n \"acc_norm_stderr\": 0.03234983203431538,\n \"mc1\": 0.5826193390452876,\n \"mc1_stderr\": 0.017262891063272164,\n \"mc2\": 0.7336752254501507,\n \"mc2_stderr\": 0.014886399154960954\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6868600682593856,\n \"acc_stderr\": 0.013552671543623497,\n \"acc_norm\": 0.71160409556314,\n \"acc_norm_stderr\": 0.013238394422428173\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7130053774148576,\n \"acc_stderr\": 0.004514345547780332,\n \"acc_norm\": 0.8827922724556861,\n \"acc_norm_stderr\": 0.003210102507177248\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6222222222222222,\n \"acc_stderr\": 0.04188307537595853,\n \"acc_norm\": 0.6222222222222222,\n \"acc_norm_stderr\": 0.04188307537595853\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.743421052631579,\n \"acc_stderr\": 0.0355418036802569,\n \"acc_norm\": 0.743421052631579,\n \"acc_norm_stderr\": 0.0355418036802569\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932261,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932261\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 
0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768078,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768078\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.625531914893617,\n \"acc_stderr\": 0.03163910665367291,\n \"acc_norm\": 0.625531914893617,\n \"acc_norm_stderr\": 0.03163910665367291\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5087719298245614,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.5087719298245614,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6413793103448275,\n \"acc_stderr\": 0.039966295748767186,\n \"acc_norm\": 0.6413793103448275,\n \"acc_norm_stderr\": 0.039966295748767186\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4894179894179894,\n \"acc_stderr\": 0.025745542276045478,\n \"acc_norm\": 0.4894179894179894,\n \"acc_norm_stderr\": 0.025745542276045478\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8161290322580645,\n \"acc_stderr\": 0.022037217340267826,\n \"acc_norm\": 0.8161290322580645,\n \"acc_norm_stderr\": 0.022037217340267826\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8585858585858586,\n \"acc_stderr\": 0.024825909793343343,\n \"acc_norm\": 0.8585858585858586,\n \"acc_norm_stderr\": 0.024825909793343343\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8808290155440415,\n \"acc_stderr\": 0.023381935348121437,\n \"acc_norm\": 0.8808290155440415,\n \"acc_norm_stderr\": 0.023381935348121437\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6692307692307692,\n \"acc_stderr\": 0.02385479568097113,\n \"acc_norm\": 0.6692307692307692,\n \"acc_norm_stderr\": 0.02385479568097113\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37777777777777777,\n \"acc_stderr\": 0.029560707392465715,\n \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.029560707392465715\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.028657491285071973,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.028657491285071973\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8366972477064221,\n \"acc_stderr\": 0.01584825580650157,\n \"acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.01584825580650157\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5879629629629629,\n \"acc_stderr\": 0.03356787758160831,\n \"acc_norm\": 0.5879629629629629,\n \"acc_norm_stderr\": 0.03356787758160831\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.02615686752393104,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.02615686752393104\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8396624472573839,\n \"acc_stderr\": 0.02388438092596567,\n \"acc_norm\": 0.8396624472573839,\n \"acc_norm_stderr\": 0.02388438092596567\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.03114679648297246,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.03114679648297246\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7404580152671756,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.7404580152671756,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8446601941747572,\n \"acc_stderr\": 0.03586594738573974,\n \"acc_norm\": 0.8446601941747572,\n \"acc_norm_stderr\": 0.03586594738573974\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.7956577266922095,\n \"acc_stderr\": 0.014419123980931894,\n \"acc_norm\": 0.7956577266922095,\n \"acc_norm_stderr\": 0.014419123980931894\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7427745664739884,\n \"acc_stderr\": 0.02353292543104429,\n \"acc_norm\": 0.7427745664739884,\n \"acc_norm_stderr\": 0.02353292543104429\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.34972067039106147,\n \"acc_stderr\": 0.015949308790233645,\n \"acc_norm\": 0.34972067039106147,\n \"acc_norm_stderr\": 0.015949308790233645\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7170418006430869,\n \"acc_stderr\": 0.02558306248998482,\n \"acc_norm\": 0.7170418006430869,\n \"acc_norm_stderr\": 0.02558306248998482\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7808641975308642,\n \"acc_stderr\": 0.023016705640262192,\n \"acc_norm\": 0.7808641975308642,\n \"acc_norm_stderr\": 0.023016705640262192\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5212765957446809,\n \"acc_stderr\": 0.029800481645628693,\n \"acc_norm\": 0.5212765957446809,\n \"acc_norm_stderr\": 0.029800481645628693\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4810951760104302,\n \"acc_stderr\": 0.012761104871472655,\n \"acc_norm\": 0.4810951760104302,\n \"acc_norm_stderr\": 0.012761104871472655\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7316176470588235,\n \"acc_stderr\": 0.026917481224377204,\n \"acc_norm\": 0.7316176470588235,\n \"acc_norm_stderr\": 0.026917481224377204\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6715686274509803,\n \"acc_stderr\": 0.018999707383162666,\n \"acc_norm\": 0.6715686274509803,\n \"acc_norm_stderr\": 0.018999707383162666\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.7090909090909091,\n \"acc_stderr\": 0.04350271442923243,\n \"acc_norm\": 0.7090909090909091,\n \"acc_norm_stderr\": 0.04350271442923243\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7551020408163265,\n \"acc_stderr\": 0.027529637440174923,\n \"acc_norm\": 0.7551020408163265,\n \"acc_norm_stderr\": 0.027529637440174923\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.02519692987482707,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.02519692987482707\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.88,\n \"acc_stderr\": 0.032659863237109066,\n \"acc_norm\": 0.88,\n \"acc_norm_stderr\": 0.032659863237109066\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5903614457831325,\n \"acc_stderr\": 0.038284011150790206,\n \"acc_norm\": 0.5903614457831325,\n \"acc_norm_stderr\": 0.038284011150790206\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5826193390452876,\n \"mc1_stderr\": 0.017262891063272164,\n \"mc2\": 0.7336752254501507,\n \"mc2_stderr\": 0.014886399154960954\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.829518547750592,\n \"acc_stderr\": 0.010569021122825897\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6087945413191812,\n \"acc_stderr\": 
0.0134425024027943\n }\n}\n```", "repo_url": "https://huggingface.co/fblgit/LUNA-SOLARkrautLM-Instruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-04-58.261893.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-04-58.261893.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-04-58.261893.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-04-58.261893.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-04-58.261893.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T13_04_58.261893", "path": ["**/details_harness|winogrande|5_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T13-04-58.261893.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_27T13_04_58.261893", "path": ["results_2023-12-27T13-04-58.261893.parquet"]}, {"split": "latest", "path": ["results_2023-12-27T13-04-58.261893.parquet"]}]}]}
2023-12-27T13:07:32+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of fblgit/LUNA-SOLARkrautLM-Instruct Dataset automatically created during the evaluation run of model fblgit/LUNA-SOLARkrautLM-Instruct on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T13:04:58.261893 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
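The processed text above references a code snippet that did not survive extraction. Based on the pattern the unstripped cards in this dump use, and assuming the leaderboard's details_<org>__<model> repo naming for this record (the id below is an inference, not stated in this record), it would look roughly like:

```python
from datasets import load_dataset

# Hypothetical reconstruction of the stripped snippet: load the per-task
# details for one config; per the card text, "train" tracks the latest run.
data = load_dataset(
    "open-llm-leaderboard/details_fblgit__LUNA-SOLARkrautLM-Instruct",
    "harness_winogrande_5",
    split="train",
)
```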
[ "# Dataset Card for Evaluation run of fblgit/LUNA-SOLARkrautLM-Instruct\n\n\n\nDataset automatically created during the evaluation run of model fblgit/LUNA-SOLARkrautLM-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:04:58.261893(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of fblgit/LUNA-SOLARkrautLM-Instruct\n\n\n\nDataset automatically created during the evaluation run of model fblgit/LUNA-SOLARkrautLM-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:04:58.261893(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 189, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of fblgit/LUNA-SOLARkrautLM-Instruct\n\n\n\nDataset automatically created during the evaluation run of model fblgit/LUNA-SOLARkrautLM-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T13:04:58.261893(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
5218d9413a612b165c2d3a32231755d5b6a24577
# Dataset Card for Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T13:05:27.918633](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B/blob/main/results_2023-12-27T13-05-27.918633.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6523016655618005, "acc_stderr": 0.03208657966943031, "acc_norm": 0.6528749403062928, "acc_norm_stderr": 0.03274010556971135, "mc1": 0.4589963280293758, "mc1_stderr": 0.017444544447661192, "mc2": 0.6249833293230833, "mc2_stderr": 0.015381372353250218 }, "harness|arc:challenge|25": { "acc": 0.6476109215017065, "acc_stderr": 0.013960142600598677, "acc_norm": 0.6800341296928327, "acc_norm_stderr": 0.013631345807016195 }, "harness|hellaswag|10": { "acc": 0.6837283409679347, "acc_stderr": 0.004640699483543313, "acc_norm": 0.8638717386974706, "acc_norm_stderr": 0.003422238702226356 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6592592592592592, "acc_stderr": 0.04094376269996792, "acc_norm": 0.6592592592592592, "acc_norm_stderr": 0.04094376269996792 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6842105263157895, "acc_stderr": 0.0378272898086547, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.0378272898086547 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.02845015479411864, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.02845015479411864 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.75, "acc_stderr": 0.03621034121889507, "acc_norm": 0.75, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, 
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.653179190751445, "acc_stderr": 0.036291466701596636, "acc_norm": 0.653179190751445, "acc_norm_stderr": 0.036291466701596636 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4411764705882353, "acc_stderr": 0.049406356306056595, "acc_norm": 0.4411764705882353, "acc_norm_stderr": 0.049406356306056595 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.77, "acc_stderr": 0.04229525846816506, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6085106382978723, "acc_stderr": 0.03190701242326812, "acc_norm": 0.6085106382978723, "acc_norm_stderr": 0.03190701242326812 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5263157894736842, "acc_stderr": 0.046970851366478626, "acc_norm": 0.5263157894736842, "acc_norm_stderr": 0.046970851366478626 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.04122737111370333, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.04122737111370333 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41005291005291006, "acc_stderr": 0.02533120243894443, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.02533120243894443 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4603174603174603, "acc_stderr": 0.04458029125470973, "acc_norm": 0.4603174603174603, "acc_norm_stderr": 0.04458029125470973 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.33, "acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7838709677419354, "acc_stderr": 0.02341529343356852, "acc_norm": 0.7838709677419354, "acc_norm_stderr": 0.02341529343356852 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7818181818181819, "acc_stderr": 0.03225078108306289, "acc_norm": 0.7818181818181819, "acc_norm_stderr": 0.03225078108306289 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7929292929292929, "acc_stderr": 0.02886977846026705, "acc_norm": 0.7929292929292929, "acc_norm_stderr": 0.02886977846026705 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9067357512953368, "acc_stderr": 0.020986854593289733, "acc_norm": 0.9067357512953368, "acc_norm_stderr": 0.020986854593289733 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6743589743589744, "acc_stderr": 0.02375966576741229, "acc_norm": 0.6743589743589744, "acc_norm_stderr": 0.02375966576741229 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.337037037037037, "acc_stderr": 0.02882088466625326, "acc_norm": 0.337037037037037, "acc_norm_stderr": 0.02882088466625326 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7016806722689075, "acc_stderr": 0.029719142876342856, "acc_norm": 0.7016806722689075, "acc_norm_stderr": 0.029719142876342856 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 
0.038227469376587525, "acc_norm": 0.32450331125827814, "acc_norm_stderr": 0.038227469376587525 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8605504587155963, "acc_stderr": 0.014852421490033053, "acc_norm": 0.8605504587155963, "acc_norm_stderr": 0.014852421490033053 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5277777777777778, "acc_stderr": 0.0340470532865388, "acc_norm": 0.5277777777777778, "acc_norm_stderr": 0.0340470532865388 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8235294117647058, "acc_stderr": 0.026756401538078966, "acc_norm": 0.8235294117647058, "acc_norm_stderr": 0.026756401538078966 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8059071729957806, "acc_stderr": 0.025744902532290913, "acc_norm": 0.8059071729957806, "acc_norm_stderr": 0.025744902532290913 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6995515695067265, "acc_stderr": 0.03076935200822914, "acc_norm": 0.6995515695067265, "acc_norm_stderr": 0.03076935200822914 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596913, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596913 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.7851239669421488, "acc_stderr": 0.037494924487096966, "acc_norm": 0.7851239669421488, "acc_norm_stderr": 0.037494924487096966 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8148148148148148, "acc_stderr": 0.03755265865037181, "acc_norm": 0.8148148148148148, "acc_norm_stderr": 0.03755265865037181 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7607361963190185, "acc_stderr": 0.033519538795212696, "acc_norm": 0.7607361963190185, "acc_norm_stderr": 0.033519538795212696 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.4732142857142857, "acc_stderr": 0.047389751192741546, "acc_norm": 0.4732142857142857, "acc_norm_stderr": 0.047389751192741546 },
    "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8760683760683761, "acc_stderr": 0.021586494001281376, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.021586494001281376 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8403575989782887, "acc_stderr": 0.013097934513263007, "acc_norm": 0.8403575989782887, "acc_norm_stderr": 0.013097934513263007 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7283236994219653, "acc_stderr": 0.023948512905468365, "acc_norm": 0.7283236994219653, "acc_norm_stderr": 0.023948512905468365 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.4670391061452514, "acc_stderr": 0.016686126653013934, "acc_norm": 0.4670391061452514, "acc_norm_stderr": 0.016686126653013934 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.7156862745098039, "acc_stderr": 0.025829163272757482, "acc_norm": 0.7156862745098039, "acc_norm_stderr": 0.025829163272757482 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.7009646302250804, "acc_stderr": 0.026003301117885142, "acc_norm": 0.7009646302250804, "acc_norm_stderr": 0.026003301117885142 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.7376543209876543, "acc_stderr": 0.024477222856135107, "acc_norm": 0.7376543209876543, "acc_norm_stderr": 0.024477222856135107 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4858156028368794, "acc_stderr": 0.02981549448368206, "acc_norm": 0.4858156028368794, "acc_norm_stderr": 0.02981549448368206 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.45827900912646674, "acc_stderr": 0.01272570165695364, "acc_norm": 0.45827900912646674, "acc_norm_stderr": 0.01272570165695364 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6838235294117647, "acc_stderr": 0.028245687391462937, "acc_norm": 0.6838235294117647, "acc_norm_stderr": 0.028245687391462937 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6666666666666666, "acc_stderr": 0.019070985589687495, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.019070985589687495 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.6909090909090909, "acc_stderr": 0.044262946482000985, "acc_norm": 0.6909090909090909, "acc_norm_stderr": 0.044262946482000985 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.7306122448979592, "acc_stderr": 0.02840125202902294, "acc_norm": 0.7306122448979592, "acc_norm_stderr": 0.02840125202902294 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8706467661691543, "acc_stderr": 0.02372983088101853, "acc_norm": 0.8706467661691543, "acc_norm_stderr": 0.02372983088101853 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.84, "acc_stderr": 0.03684529491774709, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774709 },
    "harness|hendrycksTest-virology|5": { "acc": 0.5240963855421686, "acc_stderr": 0.03887971849597264, "acc_norm": 0.5240963855421686, "acc_norm_stderr": 0.03887971849597264 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 },
    "harness|truthfulqa:mc|0": { "mc1": 0.4589963280293758, "mc1_stderr": 0.017444544447661192, "mc2": 0.6249833293230833, "mc2_stderr": 0.015381372353250218 },
    "harness|winogrande|5": { "acc": 0.7987371744277821, "acc_stderr": 0.011268519971577684 },
    "harness|gsm8k|5": { "acc": 0.6846095526914329, "acc_stderr": 0.012799353675801825 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
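As a sanity check on the figures in the results block above, the reported `acc_stderr` values are consistent with the standard error of a sample mean over binary outcomes, roughly sqrt(p(1 - p)/(n - 1)). A minimal sketch for the GSM8K entry (the question count n = 1319 is an assumption based on the size of the public GSM8K test split, not a value stated in this card):

```python
import math

# Reproduce the reported GSM8K standard error from the accuracy alone.
# n = 1319 is an assumption (the public GSM8K test split size), not a
# value taken from this card.
acc = 0.6846095526914329
n = 1319
stderr = math.sqrt(acc * (1 - acc) / (n - 1))
print(f"{stderr:.6f}")  # ~0.012799, in line with the reported acc_stderr
```

The same check applies to any of the per-task accuracies above, given that task's question count.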
open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B
[ "region:us" ]
2023-12-27T13:07:47+00:00
{"pretty_name": "Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [SanjiWatsuki/Loyal-Macaroni-Maid-7B](https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T13:05:27.918633](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__Loyal-Macaroni-Maid-7B/blob/main/results_2023-12-27T13-05-27.918633.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6523016655618005,\n \"acc_stderr\": 0.03208657966943031,\n \"acc_norm\": 0.6528749403062928,\n \"acc_norm_stderr\": 0.03274010556971135,\n \"mc1\": 0.4589963280293758,\n \"mc1_stderr\": 0.017444544447661192,\n \"mc2\": 0.6249833293230833,\n \"mc2_stderr\": 0.015381372353250218\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6476109215017065,\n \"acc_stderr\": 0.013960142600598677,\n \"acc_norm\": 0.6800341296928327,\n \"acc_norm_stderr\": 0.013631345807016195\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6837283409679347,\n \"acc_stderr\": 0.004640699483543313,\n \"acc_norm\": 0.8638717386974706,\n \"acc_norm_stderr\": 0.003422238702226356\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6592592592592592,\n \"acc_stderr\": 0.04094376269996792,\n \"acc_norm\": 0.6592592592592592,\n \"acc_norm_stderr\": 0.04094376269996792\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.02845015479411864,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.02845015479411864\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 
0.46,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4411764705882353,\n \"acc_stderr\": 0.049406356306056595,\n \"acc_norm\": 0.4411764705882353,\n \"acc_norm_stderr\": 0.049406356306056595\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6085106382978723,\n \"acc_stderr\": 0.03190701242326812,\n \"acc_norm\": 0.6085106382978723,\n \"acc_norm_stderr\": 0.03190701242326812\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5263157894736842,\n \"acc_stderr\": 0.046970851366478626,\n \"acc_norm\": 0.5263157894736842,\n \"acc_norm_stderr\": 0.046970851366478626\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7838709677419354,\n \"acc_stderr\": 0.02341529343356852,\n \"acc_norm\": 0.7838709677419354,\n \"acc_norm_stderr\": 0.02341529343356852\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.03225078108306289,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.03225078108306289\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7929292929292929,\n \"acc_stderr\": 0.02886977846026705,\n \"acc_norm\": 0.7929292929292929,\n \"acc_norm_stderr\": 0.02886977846026705\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9067357512953368,\n \"acc_stderr\": 0.020986854593289733,\n \"acc_norm\": 0.9067357512953368,\n \"acc_norm_stderr\": 0.020986854593289733\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.6743589743589744,\n \"acc_stderr\": 0.02375966576741229,\n \"acc_norm\": 0.6743589743589744,\n \"acc_norm_stderr\": 0.02375966576741229\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.337037037037037,\n \"acc_stderr\": 0.02882088466625326,\n \"acc_norm\": 0.337037037037037,\n \"acc_norm_stderr\": 0.02882088466625326\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7016806722689075,\n \"acc_stderr\": 0.029719142876342856,\n \"acc_norm\": 0.7016806722689075,\n \"acc_norm_stderr\": 0.029719142876342856\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8605504587155963,\n \"acc_stderr\": 0.014852421490033053,\n \"acc_norm\": 0.8605504587155963,\n \"acc_norm_stderr\": 0.014852421490033053\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8059071729957806,\n \"acc_stderr\": 0.025744902532290913,\n \"acc_norm\": 0.8059071729957806,\n \"acc_norm_stderr\": 0.025744902532290913\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6995515695067265,\n \"acc_stderr\": 0.03076935200822914,\n \"acc_norm\": 0.6995515695067265,\n \"acc_norm_stderr\": 0.03076935200822914\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.03755265865037181,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.03755265865037181\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7607361963190185,\n \"acc_stderr\": 0.033519538795212696,\n \"acc_norm\": 0.7607361963190185,\n \"acc_norm_stderr\": 0.033519538795212696\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4732142857142857,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.4732142857142857,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281376,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281376\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8403575989782887,\n \"acc_stderr\": 0.013097934513263007,\n 
\"acc_norm\": 0.8403575989782887,\n \"acc_norm_stderr\": 0.013097934513263007\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7283236994219653,\n \"acc_stderr\": 0.023948512905468365,\n \"acc_norm\": 0.7283236994219653,\n \"acc_norm_stderr\": 0.023948512905468365\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4670391061452514,\n \"acc_stderr\": 0.016686126653013934,\n \"acc_norm\": 0.4670391061452514,\n \"acc_norm_stderr\": 0.016686126653013934\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757482,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757482\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7009646302250804,\n \"acc_stderr\": 0.026003301117885142,\n \"acc_norm\": 0.7009646302250804,\n \"acc_norm_stderr\": 0.026003301117885142\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7376543209876543,\n \"acc_stderr\": 0.024477222856135107,\n \"acc_norm\": 0.7376543209876543,\n \"acc_norm_stderr\": 0.024477222856135107\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4858156028368794,\n \"acc_stderr\": 0.02981549448368206,\n \"acc_norm\": 0.4858156028368794,\n \"acc_norm_stderr\": 0.02981549448368206\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.45827900912646674,\n \"acc_stderr\": 0.01272570165695364,\n \"acc_norm\": 0.45827900912646674,\n \"acc_norm_stderr\": 0.01272570165695364\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462937,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462937\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.019070985589687495,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.019070985589687495\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8706467661691543,\n \"acc_stderr\": 0.02372983088101853,\n \"acc_norm\": 0.8706467661691543,\n \"acc_norm_stderr\": 0.02372983088101853\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4589963280293758,\n \"mc1_stderr\": 0.017444544447661192,\n \"mc2\": 0.6249833293230833,\n \"mc2_stderr\": 0.015381372353250218\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7987371744277821,\n \"acc_stderr\": 0.011268519971577684\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6846095526914329,\n \"acc_stderr\": 0.012799353675801825\n }\n}\n```", "repo_url": 
"https://huggingface.co/SanjiWatsuki/Loyal-Macaroni-Maid-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-05-27.918633.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["**/details_harness|winogrande|5_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T13-05-27.918633.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T13_05_27.918633", "path": ["results_2023-12-27T13-05-27.918633.parquet"]}, {"split": "latest", "path": 
["results_2023-12-27T13-05-27.918633.parquet"]}]}]}
2023-12-27T13:08:09+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B Dataset automatically created during the evaluation run of model SanjiWatsuki/Loyal-Macaroni-Maid-7B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T13:05:27.918633 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/Loyal-Macaroni-Maid-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:05:27.918633(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/Loyal-Macaroni-Maid-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:05:27.918633(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 195, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of SanjiWatsuki/Loyal-Macaroni-Maid-7B\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/Loyal-Macaroni-Maid-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T13:05:27.918633(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
e5610811761fed6e6fde27f607b7391eecef249d
# Dataset Card for Evaluation run of SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me](https://huggingface.co/SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_SanjiWatsuki__neural-chat-7b-v3-3-wizardmath-dare-me",
    "harness_winogrande_5",
    split="train")
```

## Latest results

These are the [latest results from run 2023-12-27T13:07:52.569856](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__neural-chat-7b-v3-3-wizardmath-dare-me/blob/main/results_2023-12-27T13-07-52.569856.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.5841627607476706, "acc_stderr": 0.03324720041632836, "acc_norm": 0.5857534686922531, "acc_norm_stderr": 0.03390900076414602, "mc1": 0.4504283965728274, "mc1_stderr": 0.017417264371967646, "mc2": 0.6260346176960404, "mc2_stderr": 0.01578574508599339 },
    "harness|arc:challenge|25": { "acc": 0.5588737201365188, "acc_stderr": 0.014509747749064664, "acc_norm": 0.5964163822525598, "acc_norm_stderr": 0.014337158914268447 },
    "harness|hellaswag|10": { "acc": 0.6515634335789683, "acc_stderr": 0.004755013243022125, "acc_norm": 0.8263294164509062, "acc_norm_stderr": 0.0037805175193024827 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.22, "acc_stderr": 0.041633319989322695, "acc_norm": 0.22, "acc_norm_stderr": 0.041633319989322695 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.562962962962963, "acc_stderr": 0.04284958639753401, "acc_norm": 0.562962962962963, "acc_norm_stderr": 0.04284958639753401 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.6578947368421053, "acc_stderr": 0.03860731599316091, "acc_norm": 0.6578947368421053, "acc_norm_stderr": 0.03860731599316091 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.049236596391733084, "acc_norm": 0.6, "acc_norm_stderr": 0.049236596391733084 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6339622641509434, "acc_stderr": 0.029647813539365245, "acc_norm": 0.6339622641509434, "acc_norm_stderr": 0.029647813539365245 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.6527777777777778, "acc_stderr": 0.039812405437178615, "acc_norm": 0.6527777777777778, "acc_norm_stderr": 0.039812405437178615 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.43, "acc_stderr": 0.04975698519562427, "acc_norm": 0.43, "acc_norm_stderr": 0.04975698519562427 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 0.04725815626252604, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252604 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.6011560693641619, "acc_stderr": 0.037336266553835096, "acc_norm": 0.6011560693641619, "acc_norm_stderr": 0.037336266553835096 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.37254901960784315, "acc_stderr": 0.04810840148082635, "acc_norm": 0.37254901960784315, "acc_norm_stderr": 0.04810840148082635 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4978723404255319, "acc_stderr": 0.03268572658667492, "acc_norm": 0.4978723404255319, "acc_norm_stderr": 0.03268572658667492 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.35964912280701755, "acc_stderr": 0.04514496132873633, "acc_norm": 0.35964912280701755, "acc_norm_stderr": 0.04514496132873633 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.4827586206896552, "acc_stderr": 0.04164188720169377, "acc_norm": 0.4827586206896552, "acc_norm_stderr": 0.04164188720169377 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3386243386243386, "acc_stderr": 0.02437319786798306, "acc_norm": 0.3386243386243386, "acc_norm_stderr": 0.02437319786798306 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.3333333333333333, "acc_stderr": 0.04216370213557835, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.04216370213557835 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.28, "acc_stderr": 0.04512608598542127, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542127 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6870967741935484, "acc_stderr": 0.02637756702864586, "acc_norm": 0.6870967741935484, "acc_norm_stderr": 0.02637756702864586 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.39408866995073893, "acc_stderr": 0.03438157967036545, "acc_norm": 0.39408866995073893, "acc_norm_stderr": 0.03438157967036545 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7151515151515152, "acc_stderr": 0.03524390844511781, "acc_norm": 0.7151515151515152, "acc_norm_stderr": 0.03524390844511781 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7222222222222222, "acc_stderr": 0.03191178226713548, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.03191178226713548 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8704663212435233, "acc_stderr": 0.024233532297758733, "acc_norm": 0.8704663212435233, "acc_norm_stderr": 0.024233532297758733 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.5615384615384615, "acc_stderr": 0.025158266016868578, "acc_norm": 0.5615384615384615, "acc_norm_stderr": 0.025158266016868578 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.28888888888888886, "acc_stderr": 0.027634907264178544, "acc_norm": 0.28888888888888886, "acc_norm_stderr": 0.027634907264178544 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5966386554621849, "acc_stderr": 0.031866081214088314, "acc_norm": 0.5966386554621849, "acc_norm_stderr": 0.031866081214088314 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33112582781456956, "acc_stderr": 0.038425817186598696, "acc_norm": 0.33112582781456956, "acc_norm_stderr": 0.038425817186598696 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8018348623853211, "acc_stderr": 0.01709057380421791, "acc_norm": 0.8018348623853211, "acc_norm_stderr": 0.01709057380421791 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4166666666666667, "acc_stderr": 0.03362277436608044, "acc_norm": 0.4166666666666667, "acc_norm_stderr": 0.03362277436608044 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7696078431372549, "acc_stderr": 0.029554292605695066, "acc_norm": 0.7696078431372549, "acc_norm_stderr": 0.029554292605695066 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7721518987341772, "acc_stderr": 0.027303484599069432, "acc_norm": 0.7721518987341772, "acc_norm_stderr": 0.027303484599069432 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6412556053811659, "acc_stderr": 0.03219079200419995, "acc_norm": 0.6412556053811659, "acc_norm_stderr": 0.03219079200419995 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6717557251908397, "acc_stderr": 0.04118438565806298, "acc_norm": 0.6717557251908397, "acc_norm_stderr": 0.04118438565806298 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.7272727272727273, "acc_stderr": 0.04065578140908705, "acc_norm": 0.7272727272727273, "acc_norm_stderr": 0.04065578140908705 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6851851851851852, "acc_stderr": 0.04489931073591312, "acc_norm": 0.6851851851851852, "acc_norm_stderr": 0.04489931073591312 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6748466257668712, "acc_stderr": 0.03680350371286461, "acc_norm": 0.6748466257668712, "acc_norm_stderr": 0.03680350371286461 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.48214285714285715, "acc_stderr": 0.047427623612430116, "acc_norm": 0.48214285714285715, "acc_norm_stderr": 0.047427623612430116 },
    "harness|hendrycksTest-management|5": { "acc": 0.7378640776699029, "acc_stderr": 0.043546310772605956, "acc_norm": 0.7378640776699029, "acc_norm_stderr": 0.043546310772605956 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8461538461538461, "acc_stderr": 0.02363687331748928, "acc_norm": 0.8461538461538461, "acc_norm_stderr": 0.02363687331748928 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.67, "acc_stderr": 0.04725815626252609, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252609 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7854406130268199, "acc_stderr": 0.014680033956893346, "acc_norm": 0.7854406130268199, "acc_norm_stderr": 0.014680033956893346 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6676300578034682, "acc_stderr": 0.02536116874968822, "acc_norm": 0.6676300578034682, "acc_norm_stderr": 0.02536116874968822 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2212290502793296, "acc_stderr": 0.013882164598887277, "acc_norm": 0.2212290502793296, "acc_norm_stderr": 0.013882164598887277 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.6437908496732027, "acc_stderr": 0.02742047766262924, "acc_norm": 0.6437908496732027, "acc_norm_stderr": 0.02742047766262924 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.6655948553054662, "acc_stderr": 0.026795422327893937, "acc_norm": 0.6655948553054662, "acc_norm_stderr": 0.026795422327893937 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.6882716049382716, "acc_stderr": 0.02577311116963045, "acc_norm": 0.6882716049382716, "acc_norm_stderr": 0.02577311116963045 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4326241134751773, "acc_stderr": 0.029555454236778852, "acc_norm": 0.4326241134751773, "acc_norm_stderr": 0.029555454236778852 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.40352020860495436, "acc_stderr": 0.012530241301193184, "acc_norm": 0.40352020860495436, "acc_norm_stderr": 0.012530241301193184 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.625, "acc_stderr": 0.029408372932278746, "acc_norm": 0.625, "acc_norm_stderr": 0.029408372932278746 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6062091503267973, "acc_stderr": 0.019766211991073066, "acc_norm": 0.6062091503267973, "acc_norm_stderr": 0.019766211991073066 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.6454545454545455, "acc_stderr": 0.045820048415054174, "acc_norm": 0.6454545454545455, "acc_norm_stderr": 0.045820048415054174 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.6979591836734694, "acc_stderr": 0.0293936093198798, "acc_norm": 0.6979591836734694, "acc_norm_stderr": 0.0293936093198798 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.7313432835820896, "acc_stderr": 0.03134328358208954, "acc_norm": 0.7313432835820896, "acc_norm_stderr": 0.03134328358208954 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.78, "acc_stderr": 0.04163331998932264, "acc_norm": 0.78, "acc_norm_stderr": 0.04163331998932264 },
    "harness|hendrycksTest-virology|5": { "acc": 0.4939759036144578, "acc_stderr": 0.03892212195333045, "acc_norm": 0.4939759036144578, "acc_norm_stderr": 0.03892212195333045 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.8011695906432749, "acc_stderr": 0.030611116557432528, "acc_norm": 0.8011695906432749, "acc_norm_stderr": 0.030611116557432528 },
    "harness|truthfulqa:mc|0": { "mc1": 0.4504283965728274, "mc1_stderr": 0.017417264371967646, "mc2": 0.6260346176960404, "mc2_stderr": 0.01578574508599339 },
    "harness|winogrande|5": { "acc": 0.7166535122336227, "acc_stderr": 0.012664751735505323 },
    "harness|gsm8k|5": { "acc": 0.5701288855193328, "acc_stderr": 0.013636344017393732 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
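As a companion to the loading snippet near the top of this card, here is a minimal sketch of retrieving the aggregated metrics shown under "Latest results", using the `results` configuration and `latest` split listed in this card's metadata. The exact column layout of the aggregated parquet file can vary between harness versions, so treat the printed row as illustrative rather than a fixed schema:

```python
from datasets import load_dataset

# "results" is the aggregated-metrics configuration listed in this card's
# metadata; its "latest" split always points at the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_SanjiWatsuki__neural-chat-7b-v3-3-wizardmath-dare-me",
    "results",
    split="latest",
)

# The split holds the aggregated scores for the run; inspect the first row.
print(results[0])
```

The per-task configurations (for example `harness_winogrande_5`) follow the same pattern, with `split="train"` or the timestamped split name selecting a specific run.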
open-llm-leaderboard/details_SanjiWatsuki__neural-chat-7b-v3-3-wizardmath-dare-me
[ "region:us" ]
2023-12-27T13:10:06+00:00
{"pretty_name": "Evaluation run of SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me", "dataset_summary": "Dataset automatically created during the evaluation run of model [SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me](https://huggingface.co/SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_SanjiWatsuki__neural-chat-7b-v3-3-wizardmath-dare-me\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T13:07:52.569856](https://huggingface.co/datasets/open-llm-leaderboard/details_SanjiWatsuki__neural-chat-7b-v3-3-wizardmath-dare-me/blob/main/results_2023-12-27T13-07-52.569856.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5841627607476706,\n \"acc_stderr\": 0.03324720041632836,\n \"acc_norm\": 0.5857534686922531,\n \"acc_norm_stderr\": 0.03390900076414602,\n \"mc1\": 0.4504283965728274,\n \"mc1_stderr\": 0.017417264371967646,\n \"mc2\": 0.6260346176960404,\n \"mc2_stderr\": 0.01578574508599339\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5588737201365188,\n \"acc_stderr\": 0.014509747749064664,\n \"acc_norm\": 0.5964163822525598,\n \"acc_norm_stderr\": 0.014337158914268447\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6515634335789683,\n \"acc_stderr\": 0.004755013243022125,\n \"acc_norm\": 0.8263294164509062,\n \"acc_norm_stderr\": 0.0037805175193024827\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322695,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322695\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.04284958639753401,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.04284958639753401\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6578947368421053,\n \"acc_stderr\": 0.03860731599316091,\n \"acc_norm\": 0.6578947368421053,\n \"acc_norm_stderr\": 0.03860731599316091\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6339622641509434,\n \"acc_stderr\": 0.029647813539365245,\n \"acc_norm\": 0.6339622641509434,\n \"acc_norm_stderr\": 0.029647813539365245\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6527777777777778,\n \"acc_stderr\": 0.039812405437178615,\n \"acc_norm\": 0.6527777777777778,\n \"acc_norm_stderr\": 0.039812405437178615\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562427,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562427\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252604,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252604\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.037336266553835096,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.037336266553835096\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.04810840148082635,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.04810840148082635\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4978723404255319,\n \"acc_stderr\": 0.03268572658667492,\n \"acc_norm\": 0.4978723404255319,\n \"acc_norm_stderr\": 0.03268572658667492\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.35964912280701755,\n \"acc_stderr\": 0.04514496132873633,\n \"acc_norm\": 0.35964912280701755,\n \"acc_norm_stderr\": 0.04514496132873633\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.4827586206896552,\n \"acc_stderr\": 0.04164188720169377,\n \"acc_norm\": 0.4827586206896552,\n \"acc_norm_stderr\": 0.04164188720169377\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3386243386243386,\n \"acc_stderr\": 0.02437319786798306,\n \"acc_norm\": 0.3386243386243386,\n \"acc_norm_stderr\": 0.02437319786798306\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04216370213557835,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04216370213557835\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6870967741935484,\n \"acc_stderr\": 0.02637756702864586,\n \"acc_norm\": 0.6870967741935484,\n \"acc_norm_stderr\": 0.02637756702864586\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.39408866995073893,\n \"acc_stderr\": 0.03438157967036545,\n \"acc_norm\": 0.39408866995073893,\n \"acc_norm_stderr\": 0.03438157967036545\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7151515151515152,\n \"acc_stderr\": 0.03524390844511781,\n \"acc_norm\": 0.7151515151515152,\n \"acc_norm_stderr\": 0.03524390844511781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.03191178226713548,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.03191178226713548\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758733,\n \"acc_norm\": 0.8704663212435233,\n 
\"acc_norm_stderr\": 0.024233532297758733\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5615384615384615,\n \"acc_stderr\": 0.025158266016868578,\n \"acc_norm\": 0.5615384615384615,\n \"acc_norm_stderr\": 0.025158266016868578\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.28888888888888886,\n \"acc_stderr\": 0.027634907264178544,\n \"acc_norm\": 0.28888888888888886,\n \"acc_norm_stderr\": 0.027634907264178544\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5966386554621849,\n \"acc_stderr\": 0.031866081214088314,\n \"acc_norm\": 0.5966386554621849,\n \"acc_norm_stderr\": 0.031866081214088314\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8018348623853211,\n \"acc_stderr\": 0.01709057380421791,\n \"acc_norm\": 0.8018348623853211,\n \"acc_norm_stderr\": 0.01709057380421791\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4166666666666667,\n \"acc_stderr\": 0.03362277436608044,\n \"acc_norm\": 0.4166666666666667,\n \"acc_norm_stderr\": 0.03362277436608044\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7696078431372549,\n \"acc_stderr\": 0.029554292605695066,\n \"acc_norm\": 0.7696078431372549,\n \"acc_norm_stderr\": 0.029554292605695066\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7721518987341772,\n \"acc_stderr\": 0.027303484599069432,\n \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.027303484599069432\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6412556053811659,\n \"acc_stderr\": 0.03219079200419995,\n \"acc_norm\": 0.6412556053811659,\n \"acc_norm_stderr\": 0.03219079200419995\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6717557251908397,\n \"acc_stderr\": 0.04118438565806298,\n \"acc_norm\": 0.6717557251908397,\n \"acc_norm_stderr\": 0.04118438565806298\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7272727272727273,\n \"acc_stderr\": 0.04065578140908705,\n \"acc_norm\": 0.7272727272727273,\n \"acc_norm_stderr\": 0.04065578140908705\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6851851851851852,\n \"acc_stderr\": 0.04489931073591312,\n \"acc_norm\": 0.6851851851851852,\n \"acc_norm_stderr\": 0.04489931073591312\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6748466257668712,\n \"acc_stderr\": 0.03680350371286461,\n \"acc_norm\": 0.6748466257668712,\n \"acc_norm_stderr\": 0.03680350371286461\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7378640776699029,\n \"acc_stderr\": 0.043546310772605956,\n \"acc_norm\": 0.7378640776699029,\n \"acc_norm_stderr\": 0.043546310772605956\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n \"acc_stderr\": 0.02363687331748928,\n \"acc_norm\": 0.8461538461538461,\n \"acc_norm_stderr\": 0.02363687331748928\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7854406130268199,\n \"acc_stderr\": 0.014680033956893346,\n \"acc_norm\": 0.7854406130268199,\n \"acc_norm_stderr\": 0.014680033956893346\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6676300578034682,\n \"acc_stderr\": 0.02536116874968822,\n \"acc_norm\": 0.6676300578034682,\n \"acc_norm_stderr\": 0.02536116874968822\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2212290502793296,\n \"acc_stderr\": 0.013882164598887277,\n \"acc_norm\": 0.2212290502793296,\n \"acc_norm_stderr\": 0.013882164598887277\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6437908496732027,\n \"acc_stderr\": 0.02742047766262924,\n \"acc_norm\": 0.6437908496732027,\n \"acc_norm_stderr\": 0.02742047766262924\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6655948553054662,\n \"acc_stderr\": 0.026795422327893937,\n \"acc_norm\": 0.6655948553054662,\n \"acc_norm_stderr\": 0.026795422327893937\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6882716049382716,\n \"acc_stderr\": 0.02577311116963045,\n \"acc_norm\": 0.6882716049382716,\n \"acc_norm_stderr\": 0.02577311116963045\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4326241134751773,\n \"acc_stderr\": 0.029555454236778852,\n \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.029555454236778852\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.40352020860495436,\n \"acc_stderr\": 0.012530241301193184,\n \"acc_norm\": 0.40352020860495436,\n \"acc_norm_stderr\": 0.012530241301193184\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.625,\n \"acc_stderr\": 0.029408372932278746,\n \"acc_norm\": 0.625,\n \"acc_norm_stderr\": 0.029408372932278746\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6062091503267973,\n \"acc_stderr\": 0.019766211991073066,\n \"acc_norm\": 0.6062091503267973,\n \"acc_norm_stderr\": 0.019766211991073066\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.045820048415054174,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.045820048415054174\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.0293936093198798,\n \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.0293936093198798\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7313432835820896,\n \"acc_stderr\": 0.03134328358208954,\n \"acc_norm\": 0.7313432835820896,\n \"acc_norm_stderr\": 0.03134328358208954\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.78,\n \"acc_stderr\": 0.04163331998932264,\n \"acc_norm\": 0.78,\n \"acc_norm_stderr\": 0.04163331998932264\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4939759036144578,\n \"acc_stderr\": 0.03892212195333045,\n \"acc_norm\": 0.4939759036144578,\n \"acc_norm_stderr\": 0.03892212195333045\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8011695906432749,\n \"acc_stderr\": 0.030611116557432528,\n \"acc_norm\": 0.8011695906432749,\n \"acc_norm_stderr\": 0.030611116557432528\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4504283965728274,\n \"mc1_stderr\": 0.017417264371967646,\n \"mc2\": 0.6260346176960404,\n \"mc2_stderr\": 0.01578574508599339\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7166535122336227,\n \"acc_stderr\": 0.012664751735505323\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5701288855193328,\n 
\"acc_stderr\": 0.013636344017393732\n }\n}\n```", "repo_url": "https://huggingface.co/SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-07-52.569856.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-07-52.569856.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-07-52.569856.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-07-52.569856.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-07-52.569856.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T13_07_52.569856", "path": ["**/details_harness|winogrande|5_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T13-07-52.569856.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_27T13_07_52.569856", "path": ["results_2023-12-27T13-07-52.569856.parquet"]}, {"split": "latest", "path": ["results_2023-12-27T13-07-52.569856.parquet"]}]}]}
2023-12-27T13:10:27+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me Dataset automatically created during the evaluation run of model SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T13:07:52.569856 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
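The loading snippet this card refers to ("you can for instance do the following") was dropped from this plain-text rendering. A minimal sketch of the call it describes, assuming the repo follows the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming; the repo id below is inferred from that convention rather than quoted from the card:

```python
from datasets import load_dataset

# Repo id inferred from the leaderboard naming convention
# "open-llm-leaderboard/details_<org>__<model>"; this is an assumption,
# so verify the repo exists before relying on it.
data = load_dataset(
    "open-llm-leaderboard/details_SanjiWatsuki__neural-chat-7b-v3-3-wizardmath-dare-me",
    "harness_winogrande_5",  # any of the 63 task configurations works here
    split="train",           # "train" always points to the latest results
)
```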
[ "# Dataset Card for Evaluation run of SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:07:52.569856(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:07:52.569856(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 207, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me\n\n\n\nDataset automatically created during the evaluation run of model SanjiWatsuki/neural-chat-7b-v3-3-wizardmath-dare-me on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T13:07:52.569856(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]" ]
acfd9df9f396ab3b5dbb238dc8cbc13146c250da
# Dataset Card for Evaluation run of NExtNewChattingAI/shark_tank_ai_7b_v2 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [NExtNewChattingAI/shark_tank_ai_7b_v2](https://huggingface.co/NExtNewChattingAI/shark_tank_ai_7b_v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-30T02:08:17.372463](https://huggingface.co/datasets/open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2/blob/main/results_2023-12-30T02-08-17.372463.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5913472814173071, "acc_stderr": 0.033338968813516716, "acc_norm": 0.5942120534203962, "acc_norm_stderr": 0.034013618318204875, "mc1": 0.47123623011015914, "mc1_stderr": 0.01747451384852552, "mc2": 0.6215003802373064, "mc2_stderr": 0.015827007610389632 }, "harness|arc:challenge|25": { "acc": 0.6476109215017065, "acc_stderr": 0.013960142600598682, "acc_norm": 0.6774744027303754, "acc_norm_stderr": 0.01365998089427737 }, "harness|hellaswag|10": { "acc": 0.678550089623581, "acc_stderr": 0.004660785616933753, "acc_norm": 0.8706432981477793, "acc_norm_stderr": 0.0033490845685472575 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.0446196043338474, "acc_norm": 0.27, "acc_norm_stderr": 0.0446196043338474 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5407407407407407, "acc_stderr": 0.04304979692464242, "acc_norm": 0.5407407407407407, "acc_norm_stderr": 0.04304979692464242 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6052631578947368, "acc_stderr": 0.039777499346220734, "acc_norm": 0.6052631578947368, "acc_norm_stderr": 0.039777499346220734 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.54, "acc_stderr": 0.05009082659620333, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6679245283018868, "acc_stderr": 0.028985455652334388, "acc_norm": 0.6679245283018868, "acc_norm_stderr": 0.028985455652334388 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.6458333333333334, "acc_stderr": 0.039994111357535424, "acc_norm": 0.6458333333333334, "acc_norm_stderr": 0.039994111357535424 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.44, "acc_stderr": 0.04988876515698589, "acc_norm": 0.44, "acc_norm_stderr":
0.04988876515698589 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.38, "acc_stderr": 0.04878317312145632, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145632 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6011560693641619, "acc_stderr": 0.0373362665538351, "acc_norm": 0.6011560693641619, "acc_norm_stderr": 0.0373362665538351 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.048786087144669955, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.048786087144669955 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.71, "acc_stderr": 0.04560480215720683, "acc_norm": 0.71, "acc_norm_stderr": 0.04560480215720683 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5234042553191489, "acc_stderr": 0.032650194750335815, "acc_norm": 0.5234042553191489, "acc_norm_stderr": 0.032650194750335815 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.38596491228070173, "acc_stderr": 0.04579639422070434, "acc_norm": 0.38596491228070173, "acc_norm_stderr": 0.04579639422070434 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.503448275862069, "acc_stderr": 0.04166567577101579, "acc_norm": 0.503448275862069, "acc_norm_stderr": 0.04166567577101579 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3994708994708995, "acc_stderr": 0.025225450284067877, "acc_norm": 0.3994708994708995, "acc_norm_stderr": 0.025225450284067877 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.35714285714285715, "acc_stderr": 0.04285714285714281, "acc_norm": 0.35714285714285715, "acc_norm_stderr": 0.04285714285714281 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7129032258064516, "acc_stderr": 0.025736542745594528, "acc_norm": 0.7129032258064516, "acc_norm_stderr": 0.025736542745594528 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.49261083743842365, "acc_stderr": 0.035176035403610084, "acc_norm": 0.49261083743842365, "acc_norm_stderr": 0.035176035403610084 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.62, "acc_stderr": 0.04878317312145633, "acc_norm": 0.62, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7212121212121212, "acc_stderr": 0.03501438706296781, "acc_norm": 0.7212121212121212, "acc_norm_stderr": 0.03501438706296781 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7070707070707071, "acc_stderr": 0.03242497958178815, "acc_norm": 0.7070707070707071, "acc_norm_stderr": 0.03242497958178815 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7979274611398963, "acc_stderr": 0.02897908979429673, "acc_norm": 0.7979274611398963, "acc_norm_stderr": 0.02897908979429673 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6128205128205129, "acc_stderr": 0.024697216930878937, "acc_norm": 0.6128205128205129, "acc_norm_stderr": 0.024697216930878937 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3111111111111111, "acc_stderr": 0.028226446749683515, "acc_norm": 0.3111111111111111, "acc_norm_stderr": 0.028226446749683515 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.592436974789916, "acc_stderr": 0.03191863374478466, "acc_norm": 0.592436974789916, "acc_norm_stderr": 0.03191863374478466 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.33774834437086093, 
"acc_stderr": 0.0386155754625517, "acc_norm": 0.33774834437086093, "acc_norm_stderr": 0.0386155754625517 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8128440366972477, "acc_stderr": 0.016722684526200154, "acc_norm": 0.8128440366972477, "acc_norm_stderr": 0.016722684526200154 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4722222222222222, "acc_stderr": 0.0340470532865388, "acc_norm": 0.4722222222222222, "acc_norm_stderr": 0.0340470532865388 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7941176470588235, "acc_stderr": 0.028379449451588667, "acc_norm": 0.7941176470588235, "acc_norm_stderr": 0.028379449451588667 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7848101265822784, "acc_stderr": 0.02675082699467618, "acc_norm": 0.7848101265822784, "acc_norm_stderr": 0.02675082699467618 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6457399103139013, "acc_stderr": 0.03210062154134987, "acc_norm": 0.6457399103139013, "acc_norm_stderr": 0.03210062154134987 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.6870229007633588, "acc_stderr": 0.04066962905677698, "acc_norm": 0.6870229007633588, "acc_norm_stderr": 0.04066962905677698 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7603305785123967, "acc_stderr": 0.03896878985070416, "acc_norm": 0.7603305785123967, "acc_norm_stderr": 0.03896878985070416 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7222222222222222, "acc_stderr": 0.04330043749650743, "acc_norm": 0.7222222222222222, "acc_norm_stderr": 0.04330043749650743 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.6625766871165644, "acc_stderr": 0.037149084099355745, "acc_norm": 0.6625766871165644, "acc_norm_stderr": 0.037149084099355745 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5, "acc_stderr": 0.04745789978762494, "acc_norm": 0.5, "acc_norm_stderr": 0.04745789978762494 }, "harness|hendrycksTest-management|5": { "acc": 0.7572815533980582, "acc_stderr": 0.04245022486384495, "acc_norm": 0.7572815533980582, "acc_norm_stderr": 0.04245022486384495 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8547008547008547, "acc_stderr": 0.023086635086841403, "acc_norm": 0.8547008547008547, "acc_norm_stderr": 0.023086635086841403 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.7662835249042146, "acc_stderr": 0.015133383278988829, "acc_norm": 0.7662835249042146, "acc_norm_stderr": 0.015133383278988829 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6965317919075145, "acc_stderr": 0.02475241196091721, "acc_norm": 0.6965317919075145, "acc_norm_stderr": 0.02475241196091721 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.16983240223463686, "acc_stderr": 0.012558113565152457, "acc_norm": 0.16983240223463686, "acc_norm_stderr": 0.012558113565152457 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6601307189542484, "acc_stderr": 0.027121956071388856, "acc_norm": 0.6601307189542484, "acc_norm_stderr": 0.027121956071388856 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6816720257234726, "acc_stderr": 0.026457225067811032, "acc_norm": 0.6816720257234726, "acc_norm_stderr": 0.026457225067811032 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.6790123456790124, "acc_stderr": 0.02597656601086274, "acc_norm": 0.6790123456790124, "acc_norm_stderr": 0.02597656601086274 }, "harness|hendrycksTest-professional_accounting|5": 
{ "acc": 0.4326241134751773, "acc_stderr": 0.029555454236778852, "acc_norm": 0.4326241134751773, "acc_norm_stderr": 0.029555454236778852 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.43285528031290743, "acc_stderr": 0.012654565234622864, "acc_norm": 0.43285528031290743, "acc_norm_stderr": 0.012654565234622864 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.5588235294117647, "acc_stderr": 0.030161911930767105, "acc_norm": 0.5588235294117647, "acc_norm_stderr": 0.030161911930767105 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.5833333333333334, "acc_stderr": 0.019944914136873583, "acc_norm": 0.5833333333333334, "acc_norm_stderr": 0.019944914136873583 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6, "acc_stderr": 0.0469237132203465, "acc_norm": 0.6, "acc_norm_stderr": 0.0469237132203465 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6857142857142857, "acc_stderr": 0.029719329422417475, "acc_norm": 0.6857142857142857, "acc_norm_stderr": 0.029719329422417475 }, "harness|hendrycksTest-sociology|5": { "acc": 0.7810945273631841, "acc_stderr": 0.029239174636647, "acc_norm": 0.7810945273631841, "acc_norm_stderr": 0.029239174636647 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.03487350880197769, "acc_norm": 0.86, "acc_norm_stderr": 0.03487350880197769 }, "harness|hendrycksTest-virology|5": { "acc": 0.4819277108433735, "acc_stderr": 0.038899512528272166, "acc_norm": 0.4819277108433735, "acc_norm_stderr": 0.038899512528272166 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7485380116959064, "acc_stderr": 0.033275044238468436, "acc_norm": 0.7485380116959064, "acc_norm_stderr": 0.033275044238468436 }, "harness|truthfulqa:mc|0": { "mc1": 0.47123623011015914, "mc1_stderr": 0.01747451384852552, "mc2": 0.6215003802373064, "mc2_stderr": 0.015827007610389632 }, "harness|winogrande|5": { "acc": 0.7845303867403315, "acc_stderr": 0.011555295286059282 }, "harness|gsm8k|5": { "acc": 0.45109931766489764, "acc_stderr": 0.01370645880966482 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
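The snippet near the top of this card loads a single task configuration. As a complement, here is a short sketch of pulling the aggregated "results" configuration the card describes, and of pinning an older run instead of the latest one; the timestamped split name is taken from this card's own config list, and it is assumed here that the "results" configuration exposes the same "latest" split as the task configs:

```python
from datasets import load_dataset

# Aggregated metrics across all tasks; "latest" mirrors the most
# recent run (2023-12-30T02:08:17.372463 for this card).
results = load_dataset(
    "open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2",
    "results",
    split="latest",
)

# Every earlier run is kept as its own split, named after its timestamp,
# so the first of the two runs can be pinned explicitly.
first_run = load_dataset(
    "open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2",
    "harness_gsm8k_5",
    split="2023_12_27T13_08_34.975156",
)
```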
open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2
[ "region:us" ]
2023-12-27T13:10:50+00:00
{"pretty_name": "Evaluation run of NExtNewChattingAI/shark_tank_ai_7b_v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [NExtNewChattingAI/shark_tank_ai_7b_v2](https://huggingface.co/NExtNewChattingAI/shark_tank_ai_7b_v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-30T02:08:17.372463](https://huggingface.co/datasets/open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2/blob/main/results_2023-12-30T02-08-17.372463.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5913472814173071,\n \"acc_stderr\": 0.033338968813516716,\n \"acc_norm\": 0.5942120534203962,\n \"acc_norm_stderr\": 0.034013618318204875,\n \"mc1\": 0.47123623011015914,\n \"mc1_stderr\": 0.01747451384852552,\n \"mc2\": 0.6215003802373064,\n \"mc2_stderr\": 0.015827007610389632\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6476109215017065,\n \"acc_stderr\": 0.013960142600598682,\n \"acc_norm\": 0.6774744027303754,\n \"acc_norm_stderr\": 0.01365998089427737\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.678550089623581,\n \"acc_stderr\": 0.004660785616933753,\n \"acc_norm\": 0.8706432981477793,\n \"acc_norm_stderr\": 0.0033490845685472575\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5407407407407407,\n \"acc_stderr\": 0.04304979692464242,\n \"acc_norm\": 0.5407407407407407,\n \"acc_norm_stderr\": 0.04304979692464242\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6052631578947368,\n \"acc_stderr\": 0.039777499346220734,\n \"acc_norm\": 0.6052631578947368,\n \"acc_norm_stderr\": 0.039777499346220734\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6679245283018868,\n \"acc_stderr\": 0.028985455652334388,\n \"acc_norm\": 0.6679245283018868,\n \"acc_norm_stderr\": 0.028985455652334388\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6458333333333334,\n \"acc_stderr\": 0.039994111357535424,\n \"acc_norm\": 0.6458333333333334,\n \"acc_norm_stderr\": 0.039994111357535424\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.39,\n 
\"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.0373362665538351,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.0373362665538351\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.048786087144669955,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.048786087144669955\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.04560480215720683,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.04560480215720683\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5234042553191489,\n \"acc_stderr\": 0.032650194750335815,\n \"acc_norm\": 0.5234042553191489,\n \"acc_norm_stderr\": 0.032650194750335815\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.38596491228070173,\n \"acc_stderr\": 0.04579639422070434,\n \"acc_norm\": 0.38596491228070173,\n \"acc_norm_stderr\": 0.04579639422070434\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.04166567577101579,\n \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.04166567577101579\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3994708994708995,\n \"acc_stderr\": 0.025225450284067877,\n \"acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.025225450284067877\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.35714285714285715,\n \"acc_stderr\": 0.04285714285714281,\n \"acc_norm\": 0.35714285714285715,\n \"acc_norm_stderr\": 0.04285714285714281\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7129032258064516,\n \"acc_stderr\": 0.025736542745594528,\n \"acc_norm\": 0.7129032258064516,\n \"acc_norm_stderr\": 0.025736542745594528\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7212121212121212,\n \"acc_stderr\": 0.03501438706296781,\n \"acc_norm\": 0.7212121212121212,\n \"acc_norm_stderr\": 0.03501438706296781\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7070707070707071,\n \"acc_stderr\": 0.03242497958178815,\n \"acc_norm\": 0.7070707070707071,\n \"acc_norm_stderr\": 0.03242497958178815\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7979274611398963,\n \"acc_stderr\": 0.02897908979429673,\n \"acc_norm\": 0.7979274611398963,\n \"acc_norm_stderr\": 0.02897908979429673\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6128205128205129,\n \"acc_stderr\": 0.024697216930878937,\n \"acc_norm\": 0.6128205128205129,\n \"acc_norm_stderr\": 0.024697216930878937\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3111111111111111,\n \"acc_stderr\": 0.028226446749683515,\n \"acc_norm\": 0.3111111111111111,\n \"acc_norm_stderr\": 0.028226446749683515\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.592436974789916,\n \"acc_stderr\": 0.03191863374478466,\n \"acc_norm\": 0.592436974789916,\n \"acc_norm_stderr\": 0.03191863374478466\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33774834437086093,\n \"acc_stderr\": 0.0386155754625517,\n \"acc_norm\": 0.33774834437086093,\n \"acc_norm_stderr\": 0.0386155754625517\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8128440366972477,\n \"acc_stderr\": 0.016722684526200154,\n \"acc_norm\": 0.8128440366972477,\n \"acc_norm_stderr\": 0.016722684526200154\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4722222222222222,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.4722222222222222,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7941176470588235,\n \"acc_stderr\": 0.028379449451588667,\n \"acc_norm\": 0.7941176470588235,\n \"acc_norm_stderr\": 0.028379449451588667\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7848101265822784,\n \"acc_stderr\": 0.02675082699467618,\n \"acc_norm\": 0.7848101265822784,\n \"acc_norm_stderr\": 0.02675082699467618\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6457399103139013,\n \"acc_stderr\": 0.03210062154134987,\n \"acc_norm\": 0.6457399103139013,\n \"acc_norm_stderr\": 0.03210062154134987\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6870229007633588,\n \"acc_stderr\": 0.04066962905677698,\n \"acc_norm\": 0.6870229007633588,\n \"acc_norm_stderr\": 0.04066962905677698\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070416,\n \"acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070416\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.04330043749650743,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.04330043749650743\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6625766871165644,\n \"acc_stderr\": 0.037149084099355745,\n \"acc_norm\": 0.6625766871165644,\n \"acc_norm_stderr\": 0.037149084099355745\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841403,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841403\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.7662835249042146,\n \"acc_stderr\": 
0.015133383278988829,\n \"acc_norm\": 0.7662835249042146,\n \"acc_norm_stderr\": 0.015133383278988829\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6965317919075145,\n \"acc_stderr\": 0.02475241196091721,\n \"acc_norm\": 0.6965317919075145,\n \"acc_norm_stderr\": 0.02475241196091721\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.16983240223463686,\n \"acc_stderr\": 0.012558113565152457,\n \"acc_norm\": 0.16983240223463686,\n \"acc_norm_stderr\": 0.012558113565152457\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6601307189542484,\n \"acc_stderr\": 0.027121956071388856,\n \"acc_norm\": 0.6601307189542484,\n \"acc_norm_stderr\": 0.027121956071388856\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6816720257234726,\n \"acc_stderr\": 0.026457225067811032,\n \"acc_norm\": 0.6816720257234726,\n \"acc_norm_stderr\": 0.026457225067811032\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6790123456790124,\n \"acc_stderr\": 0.02597656601086274,\n \"acc_norm\": 0.6790123456790124,\n \"acc_norm_stderr\": 0.02597656601086274\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4326241134751773,\n \"acc_stderr\": 0.029555454236778852,\n \"acc_norm\": 0.4326241134751773,\n \"acc_norm_stderr\": 0.029555454236778852\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.43285528031290743,\n \"acc_stderr\": 0.012654565234622864,\n \"acc_norm\": 0.43285528031290743,\n \"acc_norm_stderr\": 0.012654565234622864\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5588235294117647,\n \"acc_stderr\": 0.030161911930767105,\n \"acc_norm\": 0.5588235294117647,\n \"acc_norm_stderr\": 0.030161911930767105\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.019944914136873583,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.019944914136873583\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.0469237132203465,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.0469237132203465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6857142857142857,\n \"acc_stderr\": 0.029719329422417475,\n \"acc_norm\": 0.6857142857142857,\n \"acc_norm_stderr\": 0.029719329422417475\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7810945273631841,\n \"acc_stderr\": 0.029239174636647,\n \"acc_norm\": 0.7810945273631841,\n \"acc_norm_stderr\": 0.029239174636647\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.4819277108433735,\n \"acc_stderr\": 0.038899512528272166,\n \"acc_norm\": 0.4819277108433735,\n \"acc_norm_stderr\": 0.038899512528272166\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7485380116959064,\n \"acc_stderr\": 0.033275044238468436,\n \"acc_norm\": 0.7485380116959064,\n \"acc_norm_stderr\": 0.033275044238468436\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.47123623011015914,\n \"mc1_stderr\": 0.01747451384852552,\n \"mc2\": 0.6215003802373064,\n \"mc2_stderr\": 0.015827007610389632\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7845303867403315,\n \"acc_stderr\": 0.011555295286059282\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45109931766489764,\n \"acc_stderr\": 0.01370645880966482\n }\n}\n```", "repo_url": 
"https://huggingface.co/NExtNewChattingAI/shark_tank_ai_7b_v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|arc:challenge|25_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|gsm8k|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hellaswag|10_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-08-34.975156.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-08-34.975156.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T02-08-17.372463.parquet", 
"**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T02-08-17.372463.parquet", 
"**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T02-08-17.372463.parquet", 
"**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-30T02-08-17.372463.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": 
["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-08-34.975156.parquet"]}, 
{"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["**/details_harness|winogrande|5_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": ["**/details_harness|winogrande|5_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-30T02-08-17.372463.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T13_08_34.975156", "path": ["results_2023-12-27T13-08-34.975156.parquet"]}, {"split": "2023_12_30T02_08_17.372463", "path": 
["results_2023-12-30T02-08-17.372463.parquet"]}, {"split": "latest", "path": ["results_2023-12-30T02-08-17.372463.parquet"]}]}]}
2023-12-30T02:11:02+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of NExtNewChattingAI/shark_tank_ai_7b_v2 Dataset automatically created during the evaluation run of model NExtNewChattingAI/shark_tank_ai_7b_v2 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following (see the loading sketch after this card): ## Latest results These are the latest results from run 2023-12-30T02:08:17.372463 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
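The flattened card above drops the code block that normally follows "To load the details from a run". A minimal sketch of that call, assuming the dataset follows the `open-llm-leaderboard/details_<org>__<model>` repository naming convention visible in the full cards elsewhere in this dump:

```python
from datasets import load_dataset

# Repository id inferred from the naming convention used by other
# open-llm-leaderboard detail datasets; treat it as an assumption.
data = load_dataset(
    "open-llm-leaderboard/details_NExtNewChattingAI__shark_tank_ai_7b_v2",
    "harness_winogrande_5",  # any of the 63 per-task configurations works
    split="train",           # "train" always points to the latest results
)
```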
[ "# Dataset Card for Evaluation run of NExtNewChattingAI/shark_tank_ai_7b_v2\n\n\n\nDataset automatically created during the evaluation run of model NExtNewChattingAI/shark_tank_ai_7b_v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T02:08:17.372463(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of NExtNewChattingAI/shark_tank_ai_7b_v2\n\n\n\nDataset automatically created during the evaluation run of model NExtNewChattingAI/shark_tank_ai_7b_v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-30T02:08:17.372463(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 201, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of NExtNewChattingAI/shark_tank_ai_7b_v2\n\n\n\nDataset automatically created during the evaluation run of model NExtNewChattingAI/shark_tank_ai_7b_v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-30T02:08:17.372463(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
9ed2c1f303e2e1eb3ad2cd2debb4da52987c7794
# Dataset Card for Evaluation run of zyh3826/GML-Mistral-merged-v1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [zyh3826/GML-Mistral-merged-v1](https://huggingface.co/zyh3826/GML-Mistral-merged-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_zyh3826__GML-Mistral-merged-v1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T13:17:56.321736](https://huggingface.co/datasets/open-llm-leaderboard/details_zyh3826__GML-Mistral-merged-v1/blob/main/results_2023-12-27T13-17-56.321736.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6582260202487461, "acc_stderr": 0.031981343769572367, "acc_norm": 0.6589719285433423, "acc_norm_stderr": 0.03262991957716998, "mc1": 0.5532435740514076, "mc1_stderr": 0.017403977522557148, "mc2": 0.6927501325247674, "mc2_stderr": 0.015135835738898628 }, "harness|arc:challenge|25": { "acc": 0.6919795221843004, "acc_stderr": 0.013491429517292038, "acc_norm": 0.712457337883959, "acc_norm_stderr": 0.013226719056266127 }, "harness|hellaswag|10": { "acc": 0.7122087233618801, "acc_stderr": 0.00451808059452802, "acc_norm": 0.8788090021907986, "acc_norm_stderr": 0.003256821418857319 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6518518518518519, "acc_stderr": 0.041153246103369526, "acc_norm": 0.6518518518518519, "acc_norm_stderr": 0.041153246103369526 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6907894736842105, "acc_stderr": 0.037610708698674805, "acc_norm": 0.6907894736842105, "acc_norm_stderr": 0.037610708698674805 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.6, "acc_stderr": 0.04923659639173309, "acc_norm": 0.6, "acc_norm_stderr": 0.04923659639173309 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.720754716981132, "acc_stderr": 0.027611163402399715, "acc_norm": 0.720754716981132, "acc_norm_stderr": 0.027611163402399715 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7569444444444444, "acc_stderr": 0.0358687928008034, "acc_norm": 0.7569444444444444, "acc_norm_stderr": 0.0358687928008034 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.050161355804659205, "acc_norm": 0.47, "acc_norm_stderr": 0.050161355804659205 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 },
"harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6820809248554913, "acc_stderr": 0.0355068398916558, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.0355068398916558 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.46078431372549017, "acc_stderr": 0.049598599663841815, "acc_norm": 0.46078431372549017, "acc_norm_stderr": 0.049598599663841815 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.77, "acc_stderr": 0.04229525846816506, "acc_norm": 0.77, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.574468085106383, "acc_stderr": 0.03232146916224468, "acc_norm": 0.574468085106383, "acc_norm_stderr": 0.03232146916224468 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5175438596491229, "acc_stderr": 0.04700708033551038, "acc_norm": 0.5175438596491229, "acc_norm_stderr": 0.04700708033551038 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5448275862068965, "acc_stderr": 0.04149886942192117, "acc_norm": 0.5448275862068965, "acc_norm_stderr": 0.04149886942192117 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.43915343915343913, "acc_stderr": 0.025559920550531003, "acc_norm": 0.43915343915343913, "acc_norm_stderr": 0.025559920550531003 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.46825396825396826, "acc_stderr": 0.04463112720677172, "acc_norm": 0.46825396825396826, "acc_norm_stderr": 0.04463112720677172 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.38, "acc_stderr": 0.04878317312145633, "acc_norm": 0.38, "acc_norm_stderr": 0.04878317312145633 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7870967741935484, "acc_stderr": 0.023287665127268545, "acc_norm": 0.7870967741935484, "acc_norm_stderr": 0.023287665127268545 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5221674876847291, "acc_stderr": 0.03514528562175008, "acc_norm": 0.5221674876847291, "acc_norm_stderr": 0.03514528562175008 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542127, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7757575757575758, "acc_stderr": 0.03256866661681102, "acc_norm": 0.7757575757575758, "acc_norm_stderr": 0.03256866661681102 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945633, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945633 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8963730569948186, "acc_stderr": 0.02199531196364424, "acc_norm": 0.8963730569948186, "acc_norm_stderr": 0.02199531196364424 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.676923076923077, "acc_stderr": 0.023710888501970565, "acc_norm": 0.676923076923077, "acc_norm_stderr": 0.023710888501970565 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35555555555555557, "acc_stderr": 0.029185714949857416, "acc_norm": 0.35555555555555557, "acc_norm_stderr": 0.029185714949857416 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6974789915966386, "acc_stderr": 0.02983796238829194, "acc_norm": 0.6974789915966386, "acc_norm_stderr": 0.02983796238829194 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 
0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8440366972477065, "acc_stderr": 0.015555802713590167, "acc_norm": 0.8440366972477065, "acc_norm_stderr": 0.015555802713590167 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5601851851851852, "acc_stderr": 0.0338517797604481, "acc_norm": 0.5601851851851852, "acc_norm_stderr": 0.0338517797604481 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8382352941176471, "acc_stderr": 0.025845017986926917, "acc_norm": 0.8382352941176471, "acc_norm_stderr": 0.025845017986926917 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7932489451476793, "acc_stderr": 0.026361651668389094, "acc_norm": 0.7932489451476793, "acc_norm_stderr": 0.026361651668389094 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.695067264573991, "acc_stderr": 0.030898610882477515, "acc_norm": 0.695067264573991, "acc_norm_stderr": 0.030898610882477515 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8320610687022901, "acc_stderr": 0.032785485373431386, "acc_norm": 0.8320610687022901, "acc_norm_stderr": 0.032785485373431386 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990946, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990946 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7962962962962963, "acc_stderr": 0.03893542518824847, "acc_norm": 0.7962962962962963, "acc_norm_stderr": 0.03893542518824847 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7852760736196319, "acc_stderr": 0.032262193772867744, "acc_norm": 0.7852760736196319, "acc_norm_stderr": 0.032262193772867744 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.047268355537191, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.047268355537191 }, "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8760683760683761, "acc_stderr": 0.021586494001281376, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.021586494001281376 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8288633461047255, "acc_stderr": 0.0134682016140663, "acc_norm": 0.8288633461047255, "acc_norm_stderr": 0.0134682016140663 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7167630057803468, "acc_stderr": 0.02425790170532338, "acc_norm": 0.7167630057803468, "acc_norm_stderr": 0.02425790170532338 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.46145251396648046, "acc_stderr": 0.016672731267552258, "acc_norm": 0.46145251396648046, "acc_norm_stderr": 0.016672731267552258 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7352941176470589, "acc_stderr": 0.025261691219729484, "acc_norm": 0.7352941176470589, "acc_norm_stderr": 0.025261691219729484 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7106109324758842, "acc_stderr": 0.025755865922632945, "acc_norm": 0.7106109324758842, "acc_norm_stderr": 0.025755865922632945 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7345679012345679, "acc_stderr": 0.024569223600460845, "acc_norm": 0.7345679012345679, "acc_norm_stderr": 0.024569223600460845 }, 
"harness|hendrycksTest-professional_accounting|5": { "acc": 0.48226950354609927, "acc_stderr": 0.02980873964223777, "acc_norm": 0.48226950354609927, "acc_norm_stderr": 0.02980873964223777 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.470013037809648, "acc_stderr": 0.012747248967079064, "acc_norm": 0.470013037809648, "acc_norm_stderr": 0.012747248967079064 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6985294117647058, "acc_stderr": 0.027875982114273168, "acc_norm": 0.6985294117647058, "acc_norm_stderr": 0.027875982114273168 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6633986928104575, "acc_stderr": 0.019117213911495148, "acc_norm": 0.6633986928104575, "acc_norm_stderr": 0.019117213911495148 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 0.044612721759105085, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.044612721759105085 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.726530612244898, "acc_stderr": 0.028535560337128445, "acc_norm": 0.726530612244898, "acc_norm_stderr": 0.028535560337128445 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8407960199004975, "acc_stderr": 0.02587064676616914, "acc_norm": 0.8407960199004975, "acc_norm_stderr": 0.02587064676616914 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.033799766898963086, "acc_norm": 0.87, "acc_norm_stderr": 0.033799766898963086 }, "harness|hendrycksTest-virology|5": { "acc": 0.5602409638554217, "acc_stderr": 0.03864139923699121, "acc_norm": 0.5602409638554217, "acc_norm_stderr": 0.03864139923699121 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8304093567251462, "acc_stderr": 0.02878210810540171, "acc_norm": 0.8304093567251462, "acc_norm_stderr": 0.02878210810540171 }, "harness|truthfulqa:mc|0": { "mc1": 0.5532435740514076, "mc1_stderr": 0.017403977522557148, "mc2": 0.6927501325247674, "mc2_stderr": 0.015135835738898628 }, "harness|winogrande|5": { "acc": 0.8097868981846882, "acc_stderr": 0.011030335798617443 }, "harness|gsm8k|5": { "acc": 0.6497346474601972, "acc_stderr": 0.013140409455571276 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
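The per-task configurations hold per-sample details; the aggregated "results" configuration described above is the quickest way to read back the summary metrics. A minimal sketch, assuming the "latest" split alias that these detail datasets declare alongside the timestamped splits:

```python
from datasets import load_dataset

# "results" stores one row of aggregated metrics per evaluation run;
# the "latest" split alias points at the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_zyh3826__GML-Mistral-merged-v1",
    "results",
    split="latest",
)
print(results[0])  # e.g. the "all" and per-task accuracy figures shown above
```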
open-llm-leaderboard/details_zyh3826__GML-Mistral-merged-v1
[ "region:us" ]
2023-12-27T13:20:12+00:00
{"pretty_name": "Evaluation run of zyh3826/GML-Mistral-merged-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [zyh3826/GML-Mistral-merged-v1](https://huggingface.co/zyh3826/GML-Mistral-merged-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_zyh3826__GML-Mistral-merged-v1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T13:17:56.321736](https://huggingface.co/datasets/open-llm-leaderboard/details_zyh3826__GML-Mistral-merged-v1/blob/main/results_2023-12-27T13-17-56.321736.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6582260202487461,\n \"acc_stderr\": 0.031981343769572367,\n \"acc_norm\": 0.6589719285433423,\n \"acc_norm_stderr\": 0.03262991957716998,\n \"mc1\": 0.5532435740514076,\n \"mc1_stderr\": 0.017403977522557148,\n \"mc2\": 0.6927501325247674,\n \"mc2_stderr\": 0.015135835738898628\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6919795221843004,\n \"acc_stderr\": 0.013491429517292038,\n \"acc_norm\": 0.712457337883959,\n \"acc_norm_stderr\": 0.013226719056266127\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7122087233618801,\n \"acc_stderr\": 0.00451808059452802,\n \"acc_norm\": 0.8788090021907986,\n \"acc_norm_stderr\": 0.003256821418857319\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6907894736842105,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.6907894736842105,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.720754716981132,\n \"acc_stderr\": 0.027611163402399715,\n \"acc_norm\": 0.720754716981132,\n \"acc_norm_stderr\": 0.027611163402399715\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n 
\"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.46078431372549017,\n \"acc_stderr\": 0.049598599663841815,\n \"acc_norm\": 0.46078431372549017,\n \"acc_norm_stderr\": 0.049598599663841815\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.574468085106383,\n \"acc_stderr\": 0.03232146916224468,\n \"acc_norm\": 0.574468085106383,\n \"acc_norm_stderr\": 0.03232146916224468\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5175438596491229,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.5175438596491229,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.43915343915343913,\n \"acc_stderr\": 0.025559920550531003,\n \"acc_norm\": 0.43915343915343913,\n \"acc_norm_stderr\": 0.025559920550531003\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677172,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677172\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7870967741935484,\n \"acc_stderr\": 0.023287665127268545,\n \"acc_norm\": 0.7870967741935484,\n \"acc_norm_stderr\": 0.023287665127268545\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.676923076923077,\n \"acc_stderr\": 0.023710888501970565,\n \"acc_norm\": 0.676923076923077,\n \"acc_norm_stderr\": 0.023710888501970565\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35555555555555557,\n \"acc_stderr\": 0.029185714949857416,\n \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.029185714949857416\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6974789915966386,\n \"acc_stderr\": 0.02983796238829194,\n \"acc_norm\": 0.6974789915966386,\n \"acc_norm_stderr\": 0.02983796238829194\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8440366972477065,\n \"acc_stderr\": 0.015555802713590167,\n \"acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.015555802713590167\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5601851851851852,\n \"acc_stderr\": 0.0338517797604481,\n \"acc_norm\": 0.5601851851851852,\n \"acc_norm_stderr\": 0.0338517797604481\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8382352941176471,\n \"acc_stderr\": 0.025845017986926917,\n \"acc_norm\": 0.8382352941176471,\n \"acc_norm_stderr\": 0.025845017986926917\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7932489451476793,\n \"acc_stderr\": 0.026361651668389094,\n \"acc_norm\": 0.7932489451476793,\n \"acc_norm_stderr\": 0.026361651668389094\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8320610687022901,\n \"acc_stderr\": 0.032785485373431386,\n \"acc_norm\": 0.8320610687022901,\n \"acc_norm_stderr\": 0.032785485373431386\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281376,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281376\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8288633461047255,\n \"acc_stderr\": 0.0134682016140663,\n \"acc_norm\": 
0.8288633461047255,\n \"acc_norm_stderr\": 0.0134682016140663\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.02425790170532338,\n \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.02425790170532338\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.46145251396648046,\n \"acc_stderr\": 0.016672731267552258,\n \"acc_norm\": 0.46145251396648046,\n \"acc_norm_stderr\": 0.016672731267552258\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.025261691219729484,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.025261691219729484\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7106109324758842,\n \"acc_stderr\": 0.025755865922632945,\n \"acc_norm\": 0.7106109324758842,\n \"acc_norm_stderr\": 0.025755865922632945\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.470013037809648,\n \"acc_stderr\": 0.012747248967079064,\n \"acc_norm\": 0.470013037809648,\n \"acc_norm_stderr\": 0.012747248967079064\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6985294117647058,\n \"acc_stderr\": 0.027875982114273168,\n \"acc_norm\": 0.6985294117647058,\n \"acc_norm_stderr\": 0.027875982114273168\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6633986928104575,\n \"acc_stderr\": 0.019117213911495148,\n \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.019117213911495148\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.044612721759105085,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.044612721759105085\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.726530612244898,\n \"acc_stderr\": 0.028535560337128445,\n \"acc_norm\": 0.726530612244898,\n \"acc_norm_stderr\": 0.028535560337128445\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616914,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616914\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699121,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699121\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5532435740514076,\n \"mc1_stderr\": 0.017403977522557148,\n \"mc2\": 0.6927501325247674,\n \"mc2_stderr\": 0.015135835738898628\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8097868981846882,\n \"acc_stderr\": 0.011030335798617443\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6497346474601972,\n \"acc_stderr\": 0.013140409455571276\n }\n}\n```", "repo_url": 
"https://huggingface.co/zyh3826/GML-Mistral-merged-v1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-17-56.321736.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-17-56.321736.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-17-56.321736.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-17-56.321736.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-17-56.321736.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-17-56.321736.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["**/details_harness|winogrande|5_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T13-17-56.321736.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T13_17_56.321736", "path": ["results_2023-12-27T13-17-56.321736.parquet"]}, {"split": "latest", "path": 
["results_2023-12-27T13-17-56.321736.parquet"]}]}]}
2023-12-27T13:20:33+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of zyh3826/GML-Mistral-merged-v1 Dataset automatically created during the evaluation run of model zyh3826/GML-Mistral-merged-v1 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T13:17:56.321736 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
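Concretely, the loading step mentioned in the card ("To load the details from a run, you can for instance do the following") uses the `datasets` library with a harness configuration name and a split; this mirrors the snippet given in the card's metadata:

```python
from datasets import load_dataset

# Configuration names follow the harness_<task>_<n-shot> pattern.
data = load_dataset(
    "open-llm-leaderboard/details_zyh3826__GML-Mistral-merged-v1",
    "harness_winogrande_5",
    split="train",
)
```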
[ "# Dataset Card for Evaluation run of zyh3826/GML-Mistral-merged-v1\n\n\n\nDataset automatically created during the evaluation run of model zyh3826/GML-Mistral-merged-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:17:56.321736(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of zyh3826/GML-Mistral-merged-v1\n\n\n\nDataset automatically created during the evaluation run of model zyh3826/GML-Mistral-merged-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:17:56.321736(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 193, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of zyh3826/GML-Mistral-merged-v1\n\n\n\nDataset automatically created during the evaluation run of model zyh3826/GML-Mistral-merged-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T13:17:56.321736(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]" ]
45f527d8a82d19a209c129884e39739a1594588e
# Dataset Card for "tuluv2_sft_mixture_no_science" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
kejian/tuluv2_sft_mixture_no_science
[ "region:us" ]
2023-12-27T13:22:33+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "dataset", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "messages", "list": [{"name": "role", "dtype": "string"}, {"name": "content", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1210917065.806392, "num_examples": 318686}], "download_size": 0, "dataset_size": 1210917065.806392}}
2023-12-27T13:26:49+00:00
[]
[]
TAGS #region-us
# Dataset Card for "tuluv2_sft_mixture_no_science" More Information needed
[ "# Dataset Card for \"tuluv2_sft_mixture_no_science\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"tuluv2_sft_mixture_no_science\"\n\nMore Information needed" ]
[ 6, 23 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"tuluv2_sft_mixture_no_science\"\n\nMore Information needed" ]
a79da805dfe1f81cec4c28d090f7f6af2caa861d
# Dataset Card for Evaluation run of bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED](https://huggingface.co/bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_bn22__OpenHermes-2.5-Mistral-7B-MISALIGNED", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T13:23:06.286047](https://huggingface.co/datasets/open-llm-leaderboard/details_bn22__OpenHermes-2.5-Mistral-7B-MISALIGNED/blob/main/results_2023-12-27T13-23-06.286047.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6365778018786907, "acc_stderr": 0.032237330992015784, "acc_norm": 0.6411759500596701, "acc_norm_stderr": 0.03287481285485409, "mc1": 0.3635250917992656, "mc1_stderr": 0.016838862883965834, "mc2": 0.5285201353037359, "mc2_stderr": 0.015274085526697238 }, "harness|arc:challenge|25": { "acc": 0.6168941979522184, "acc_stderr": 0.014206472661672876, "acc_norm": 0.6535836177474402, "acc_norm_stderr": 0.013905011180063225 }, "harness|hellaswag|10": { "acc": 0.6560446126269668, "acc_stderr": 0.004740555782142168, "acc_norm": 0.8467436765584545, "acc_norm_stderr": 0.0035949818233199046 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.04760952285695236, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695236 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6, "acc_stderr": 0.04232073695151589, "acc_norm": 0.6, "acc_norm_stderr": 0.04232073695151589 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7039473684210527, "acc_stderr": 0.03715062154998904, "acc_norm": 0.7039473684210527, "acc_norm_stderr": 0.03715062154998904 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.57, "acc_stderr": 0.049756985195624284, "acc_norm": 0.57, "acc_norm_stderr": 0.049756985195624284 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118637, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118637 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7569444444444444, "acc_stderr": 0.03586879280080341, "acc_norm": 0.7569444444444444, "acc_norm_stderr": 0.03586879280080341 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.47, "acc_stderr": 0.05016135580465919, "acc_norm": 0.47, "acc_norm_stderr": 0.05016135580465919 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333
}, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6184971098265896, "acc_stderr": 0.03703851193099521, "acc_norm": 0.6184971098265896, "acc_norm_stderr": 0.03703851193099521 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3627450980392157, "acc_stderr": 0.04784060704105653, "acc_norm": 0.3627450980392157, "acc_norm_stderr": 0.04784060704105653 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5531914893617021, "acc_stderr": 0.0325005368436584, "acc_norm": 0.5531914893617021, "acc_norm_stderr": 0.0325005368436584 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.49122807017543857, "acc_stderr": 0.04702880432049615, "acc_norm": 0.49122807017543857, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5241379310344828, "acc_stderr": 0.0416180850350153, "acc_norm": 0.5241379310344828, "acc_norm_stderr": 0.0416180850350153 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.42592592592592593, "acc_stderr": 0.02546714904546955, "acc_norm": 0.42592592592592593, "acc_norm_stderr": 0.02546714904546955 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.4603174603174603, "acc_stderr": 0.04458029125470973, "acc_norm": 0.4603174603174603, "acc_norm_stderr": 0.04458029125470973 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7967741935483871, "acc_stderr": 0.02289168798455495, "acc_norm": 0.7967741935483871, "acc_norm_stderr": 0.02289168798455495 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5172413793103449, "acc_stderr": 0.035158955511656986, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.035158955511656986 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621505, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7818181818181819, "acc_stderr": 0.032250781083062896, "acc_norm": 0.7818181818181819, "acc_norm_stderr": 0.032250781083062896 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.803030303030303, "acc_stderr": 0.02833560973246336, "acc_norm": 0.803030303030303, "acc_norm_stderr": 0.02833560973246336 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.021500249576033456, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.021500249576033456 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6102564102564103, "acc_stderr": 0.024726967886647074, "acc_norm": 0.6102564102564103, "acc_norm_stderr": 0.024726967886647074 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3037037037037037, "acc_stderr": 0.028037929969114993, "acc_norm": 0.3037037037037037, "acc_norm_stderr": 0.028037929969114993 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.680672268907563, "acc_stderr": 0.030283995525884396, "acc_norm": 0.680672268907563, "acc_norm_stderr": 0.030283995525884396 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 
0.03822746937658752, "acc_norm": 0.32450331125827814, "acc_norm_stderr": 0.03822746937658752 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8348623853211009, "acc_stderr": 0.01591955782997604, "acc_norm": 0.8348623853211009, "acc_norm_stderr": 0.01591955782997604 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5092592592592593, "acc_stderr": 0.034093869469927006, "acc_norm": 0.5092592592592593, "acc_norm_stderr": 0.034093869469927006 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.7892156862745098, "acc_stderr": 0.028626547912437413, "acc_norm": 0.7892156862745098, "acc_norm_stderr": 0.028626547912437413 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.810126582278481, "acc_stderr": 0.02553010046023349, "acc_norm": 0.810126582278481, "acc_norm_stderr": 0.02553010046023349 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7040358744394619, "acc_stderr": 0.030636591348699796, "acc_norm": 0.7040358744394619, "acc_norm_stderr": 0.030636591348699796 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7938931297709924, "acc_stderr": 0.035477710041594654, "acc_norm": 0.7938931297709924, "acc_norm_stderr": 0.035477710041594654 }, "harness|hendrycksTest-international_law|5": { "acc": 0.768595041322314, "acc_stderr": 0.03849856098794088, "acc_norm": 0.768595041322314, "acc_norm_stderr": 0.03849856098794088 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7870370370370371, "acc_stderr": 0.039578354719809805, "acc_norm": 0.7870370370370371, "acc_norm_stderr": 0.039578354719809805 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7852760736196319, "acc_stderr": 0.032262193772867744, "acc_norm": 0.7852760736196319, "acc_norm_stderr": 0.032262193772867744 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.5089285714285714, "acc_stderr": 0.04745033255489123, "acc_norm": 0.5089285714285714, "acc_norm_stderr": 0.04745033255489123 }, "harness|hendrycksTest-management|5": { "acc": 0.7766990291262136, "acc_stderr": 0.04123553189891431, "acc_norm": 0.7766990291262136, "acc_norm_stderr": 0.04123553189891431 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8717948717948718, "acc_stderr": 0.02190190511507333, "acc_norm": 0.8717948717948718, "acc_norm_stderr": 0.02190190511507333 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.7, "acc_stderr": 0.046056618647183814, "acc_norm": 0.7, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8314176245210728, "acc_stderr": 0.0133878957315436, "acc_norm": 0.8314176245210728, "acc_norm_stderr": 0.0133878957315436 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7196531791907514, "acc_stderr": 0.02418242749657761, "acc_norm": 0.7196531791907514, "acc_norm_stderr": 0.02418242749657761 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.30837988826815643, "acc_stderr": 0.015445716910998879, "acc_norm": 0.30837988826815643, "acc_norm_stderr": 0.015445716910998879 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7516339869281046, "acc_stderr": 0.02473998135511359, "acc_norm": 0.7516339869281046, "acc_norm_stderr": 0.02473998135511359 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.6881028938906752, "acc_stderr": 0.02631185807185416, "acc_norm": 0.6881028938906752, "acc_norm_stderr": 0.02631185807185416 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.75, "acc_stderr": 0.02409347123262133, "acc_norm": 0.75, "acc_norm_stderr": 0.02409347123262133 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 
0.5106382978723404, "acc_stderr": 0.02982074719142244, "acc_norm": 0.5106382978723404, "acc_norm_stderr": 0.02982074719142244 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.46936114732724904, "acc_stderr": 0.012746237711716634, "acc_norm": 0.46936114732724904, "acc_norm_stderr": 0.012746237711716634 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6764705882352942, "acc_stderr": 0.028418208619406762, "acc_norm": 0.6764705882352942, "acc_norm_stderr": 0.028418208619406762 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6699346405228758, "acc_stderr": 0.019023726160724553, "acc_norm": 0.6699346405228758, "acc_norm_stderr": 0.019023726160724553 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784593, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784593 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8109452736318408, "acc_stderr": 0.027686913588013024, "acc_norm": 0.8109452736318408, "acc_norm_stderr": 0.027686913588013024 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.86, "acc_stderr": 0.03487350880197769, "acc_norm": 0.86, "acc_norm_stderr": 0.03487350880197769 }, "harness|hendrycksTest-virology|5": { "acc": 0.5602409638554217, "acc_stderr": 0.03864139923699122, "acc_norm": 0.5602409638554217, "acc_norm_stderr": 0.03864139923699122 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.3635250917992656, "mc1_stderr": 0.016838862883965834, "mc2": 0.5285201353037359, "mc2_stderr": 0.015274085526697238 }, "harness|winogrande|5": { "acc": 0.77663772691397, "acc_stderr": 0.011705697565205187 }, "harness|gsm8k|5": { "acc": 0.45261561789234267, "acc_stderr": 0.013710499070934965 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
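As a practical addendum to the loading snippet above, the aggregated metrics live in the "results" configuration. The sketch below is a minimal example, assuming only what this repo's configuration list documents (a `results` config whose `latest` split points at the newest results parquet file):

```python
from datasets import load_dataset

# Minimal sketch: pull the aggregated metrics for the most recent run.
# The "results" config and its "latest" split come from this dataset's
# documented configuration list; the row layout is assumed to mirror the
# results JSON shown above.
results = load_dataset(
    "open-llm-leaderboard/details_bn22__OpenHermes-2.5-Mistral-7B-MISALIGNED",
    "results",
    split="latest",
)
print(results[0])  # one row holding the aggregated scores for the run
```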
open-llm-leaderboard/details_bn22__OpenHermes-2.5-Mistral-7B-MISALIGNED
[ "region:us" ]
2023-12-27T13:25:18+00:00
{"pretty_name": "Evaluation run of bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED", "dataset_summary": "Dataset automatically created during the evaluation run of model [bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED](https://huggingface.co/bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_bn22__OpenHermes-2.5-Mistral-7B-MISALIGNED\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T13:23:06.286047](https://huggingface.co/datasets/open-llm-leaderboard/details_bn22__OpenHermes-2.5-Mistral-7B-MISALIGNED/blob/main/results_2023-12-27T13-23-06.286047.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6365778018786907,\n \"acc_stderr\": 0.032237330992015784,\n \"acc_norm\": 0.6411759500596701,\n \"acc_norm_stderr\": 0.03287481285485409,\n \"mc1\": 0.3635250917992656,\n \"mc1_stderr\": 0.016838862883965834,\n \"mc2\": 0.5285201353037359,\n \"mc2_stderr\": 0.015274085526697238\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6168941979522184,\n \"acc_stderr\": 0.014206472661672876,\n \"acc_norm\": 0.6535836177474402,\n \"acc_norm_stderr\": 0.013905011180063225\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6560446126269668,\n \"acc_stderr\": 0.004740555782142168,\n \"acc_norm\": 0.8467436765584545,\n \"acc_norm_stderr\": 0.0035949818233199046\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695236,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695236\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.57,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.57,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7569444444444444,\n \"acc_stderr\": 0.03586879280080341,\n \"acc_norm\": 0.7569444444444444,\n \"acc_norm_stderr\": 0.03586879280080341\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 
0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3627450980392157,\n \"acc_stderr\": 0.04784060704105653,\n \"acc_norm\": 0.3627450980392157,\n \"acc_norm_stderr\": 0.04784060704105653\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5531914893617021,\n \"acc_stderr\": 0.0325005368436584,\n \"acc_norm\": 0.5531914893617021,\n \"acc_norm_stderr\": 0.0325005368436584\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42592592592592593,\n \"acc_stderr\": 0.02546714904546955,\n \"acc_norm\": 0.42592592592592593,\n \"acc_norm_stderr\": 0.02546714904546955\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4603174603174603,\n \"acc_stderr\": 0.04458029125470973,\n \"acc_norm\": 0.4603174603174603,\n \"acc_norm_stderr\": 0.04458029125470973\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7967741935483871,\n \"acc_stderr\": 0.02289168798455495,\n \"acc_norm\": 0.7967741935483871,\n \"acc_norm_stderr\": 0.02289168798455495\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.032250781083062896,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.032250781083062896\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.803030303030303,\n \"acc_stderr\": 0.02833560973246336,\n \"acc_norm\": 0.803030303030303,\n \"acc_norm_stderr\": 0.02833560973246336\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033456,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033456\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": 
{\n \"acc\": 0.6102564102564103,\n \"acc_stderr\": 0.024726967886647074,\n \"acc_norm\": 0.6102564102564103,\n \"acc_norm_stderr\": 0.024726967886647074\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.028037929969114993,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.028037929969114993\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.680672268907563,\n \"acc_stderr\": 0.030283995525884396,\n \"acc_norm\": 0.680672268907563,\n \"acc_norm_stderr\": 0.030283995525884396\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658752,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658752\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8348623853211009,\n \"acc_stderr\": 0.01591955782997604,\n \"acc_norm\": 0.8348623853211009,\n \"acc_norm_stderr\": 0.01591955782997604\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7892156862745098,\n \"acc_stderr\": 0.028626547912437413,\n \"acc_norm\": 0.7892156862745098,\n \"acc_norm_stderr\": 0.028626547912437413\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.810126582278481,\n \"acc_stderr\": 0.02553010046023349,\n \"acc_norm\": 0.810126582278481,\n \"acc_norm_stderr\": 0.02553010046023349\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7040358744394619,\n \"acc_stderr\": 0.030636591348699796,\n \"acc_norm\": 0.7040358744394619,\n \"acc_norm_stderr\": 0.030636591348699796\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7938931297709924,\n \"acc_stderr\": 0.035477710041594654,\n \"acc_norm\": 0.7938931297709924,\n \"acc_norm_stderr\": 0.035477710041594654\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.039578354719809805,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.039578354719809805\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7852760736196319,\n \"acc_stderr\": 0.032262193772867744,\n \"acc_norm\": 0.7852760736196319,\n \"acc_norm_stderr\": 0.032262193772867744\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5089285714285714,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.5089285714285714,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.7,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.7,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8314176245210728,\n \"acc_stderr\": 0.0133878957315436,\n 
\"acc_norm\": 0.8314176245210728,\n \"acc_norm_stderr\": 0.0133878957315436\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.02418242749657761,\n \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.02418242749657761\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.30837988826815643,\n \"acc_stderr\": 0.015445716910998879,\n \"acc_norm\": 0.30837988826815643,\n \"acc_norm_stderr\": 0.015445716910998879\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7516339869281046,\n \"acc_stderr\": 0.02473998135511359,\n \"acc_norm\": 0.7516339869281046,\n \"acc_norm_stderr\": 0.02473998135511359\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6881028938906752,\n \"acc_stderr\": 0.02631185807185416,\n \"acc_norm\": 0.6881028938906752,\n \"acc_norm_stderr\": 0.02631185807185416\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.5106382978723404,\n \"acc_stderr\": 0.02982074719142244,\n \"acc_norm\": 0.5106382978723404,\n \"acc_norm_stderr\": 0.02982074719142244\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.46936114732724904,\n \"acc_stderr\": 0.012746237711716634,\n \"acc_norm\": 0.46936114732724904,\n \"acc_norm_stderr\": 0.012746237711716634\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.028418208619406762,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.028418208619406762\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6699346405228758,\n \"acc_stderr\": 0.019023726160724553,\n \"acc_norm\": 0.6699346405228758,\n \"acc_norm_stderr\": 0.019023726160724553\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n \"acc_stderr\": 0.027686913588013024,\n \"acc_norm\": 0.8109452736318408,\n \"acc_norm_stderr\": 0.027686913588013024\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.03487350880197769,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.03487350880197769\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5602409638554217,\n \"acc_stderr\": 0.03864139923699122,\n \"acc_norm\": 0.5602409638554217,\n \"acc_norm_stderr\": 0.03864139923699122\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3635250917992656,\n \"mc1_stderr\": 0.016838862883965834,\n \"mc2\": 0.5285201353037359,\n \"mc2_stderr\": 0.015274085526697238\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.77663772691397,\n \"acc_stderr\": 0.011705697565205187\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.45261561789234267,\n \"acc_stderr\": 0.013710499070934965\n }\n}\n```", "repo_url": 
"https://huggingface.co/bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-23-06.286047.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-23-06.286047.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-23-06.286047.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T13-23-06.286047.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-23-06.286047.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T13_23_06.286047", "path": ["**/details_harness|winogrande|5_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T13-23-06.286047.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_27T13_23_06.286047", "path": ["results_2023-12-27T13-23-06.286047.parquet"]}, {"split": "latest", "path": ["results_2023-12-27T13-23-06.286047.parquet"]}]}]}
2023-12-27T13:25:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED Dataset automatically created during the evaluation run of model bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T13:23:06.286047 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Evaluation run of bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED\n\n\n\nDataset automatically created during the evaluation run of model bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:23:06.286047(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED\n\n\n\nDataset automatically created during the evaluation run of model bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T13:23:06.286047(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 199, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED\n\n\n\nDataset automatically created during the evaluation run of model bn22/OpenHermes-2.5-Mistral-7B-MISALIGNED on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T13:23:06.286047(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
a360af45b7d6de5ff0d483e886d93df1c3566e0a
# Mobile Face Liveness Detection The dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under **different lighting conditions** and with **different attributes** (*glasses, masks, hats, hoods, wigs, and mustaches for men*). In the dataset, there are **4 types of videos** filmed on mobile devices: - **2D mask with holes for eyes** - demonstration of an attack with a paper/cardboard mask (*mask*) - **2D mask with holes for eyes, nose, and mouth** - demonstration of an attack with a paper/cardboard mask with cutouts for the nose and mouth (*mask_cut*) - **2D mask** - demonstration of an attack with a paper/cardboard silhouette (*outline*) - **Real Video** - demonstration of a real person's face (*real*) ![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2Fa987634cc73688e8bae895a22486ab0e%2FFrame%2058.png?generation=1700553110330831&alt=media) The dataset supports researchers and developers in **facial expression analysis, anti-spoofing, face detection, re-identification and face recognition tasks**. The inclusion of various attributes and different lighting conditions aims to enhance the **robustness and effectiveness** of anti-spoofing models in real-world scenarios. ## The full version of the dataset includes 7,200+ videos of people; leave a request on **[TrainingData](https://trainingdata.pro/data-market/on-device-face-liveness-detection?utm_source=huggingface&utm_medium=cpc&utm_campaign=on-device-face-liveness-detection)** to buy the dataset ### Statistics for the dataset (gender, type of the device, type of the attack): ![](https://www.googleapis.com/download/storage/v1/b/kaggle-user-content/o/inbox%2F12421376%2Fe34a79ae94a627cfd365581a2d0c8155%2FFrame%2059.png?generation=1700553456150931&alt=media) # Get the Dataset ## This is just an example of the data Leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/on-device-face-liveness-detection?utm_source=huggingface&utm_medium=cpc&utm_campaign=on-device-face-liveness-detection) to learn about the price and buy the dataset** # Content The folder **files** includes: - **mask** - includes videos of people wearing a 2D mask with holes for eyes, - **mask_cut** - includes videos of people wearing a 2D mask with holes for eyes, nose, and mouth, - **outline** - includes videos of people wearing a 2D mask, - **real** - includes real videos of people ### File with the extension .csv - **file**: link to access the file, - **type**: type of the video (*real, mask, outline, mask_cut*) ## **[TrainingData](https://trainingdata.pro/data-market/on-device-face-liveness-detection?utm_source=huggingface&utm_medium=cpc&utm_campaign=on-device-face-liveness-detection)** provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: **<https://www.kaggle.com/trainingdatapro/datasets>** TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets** *keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset,
spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution*
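For readers who want to work with the annotation file described above, here is a minimal sketch. The `file` and `type` columns are the ones documented in the card; the CSV file name itself is an assumption, since the card does not state it:

```python
import pandas as pd

# "annotations.csv" is a placeholder name; the card only specifies a .csv
# annotation file with "file" and "type" columns.
df = pd.read_csv("annotations.csv")

# Inspect class balance across the four video types
# (real, mask, outline, mask_cut) before training an anti-spoofing model.
print(df["type"].value_counts())

# Collect links to the genuine-face videos only.
real_videos = df.loc[df["type"] == "real", "file"].tolist()
```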
TrainingDataPro/on-device-face-liveness-detection
[ "task_categories:video-classification", "task_categories:image-to-image", "task_categories:image-classification", "language:en", "license:cc-by-nc-nd-4.0", "region:us" ]
2023-12-27T13:59:39+00:00
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["video-classification", "image-to-image", "image-classification"]}
2023-12-27T14:09:17+00:00
[]
[ "en" ]
TAGS #task_categories-video-classification #task_categories-image-to-image #task_categories-image-classification #language-English #license-cc-by-nc-nd-4.0 #region-us
# Mobile Face Liveness Detection The dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*). In the dataset, there are 4 types of videos filmed on mobile devices: - 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*) - 2D mask with holes for eyes, nose, and mouth - demonstration of an attack with a paper/cardboard mask with cutouts for the nose and mouth (*mask_cut*) - 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*) - Real Video - demonstration of a real person's face (*real*) ![](URL) The dataset supports researchers and developers in facial expression analysis, anti-spoofing, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios. ## The full version of the dataset includes 7,200+ videos of people; leave a request on TrainingData to buy the dataset ### Statistics for the dataset (gender, type of the device, type of the attack): ![](URL) # Get the Dataset ## This is just an example of the data Leave a request on URL to learn about the price and buy the dataset # Content The folder files includes: - mask - includes videos of people wearing a 2D mask with holes for eyes, - mask_cut - includes videos of people wearing a 2D mask with holes for eyes, nose, and mouth, - outline - includes videos of people wearing a 2D mask, - real - includes real videos of people ### File with the extension .csv - file: link to access the file, - type: type of the video (*real, mask, outline, mask_cut*) ## TrainingData provides high-quality data annotation tailored to your needs More datasets in TrainingData's Kaggle account: URL TrainingData's GitHub: URL *keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset, spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution*
[ "# Mobile Face Liveness Detection\n\nThe dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*).\n\nIn the dataset, there are 4 types of videos filmed on mobile devices:\n\n- 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*)\n- 2D mask with holes for eyes, nose, and mouth - demonstration of an attack with a paper/cardboard mask with cutouts for the nose and mouth (*mask_cut*)\n- 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*)\n- Real Video - demonstration of a real person's face (*real*)\n\n![](URL\n\nThe dataset allows researchers and developers in recognizing and analyzing facial expressions, anti-spoofing tasks, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios.", "## Full version of the dataset includes 7,200+ videos of people, leave a request on TrainingData to buy the dataset", "### Statistics for the dataset (gender, type of the device, type of the attack):\n\n![](URL", "# Get the Dataset", "## This is just an example of the data \nLeave a request on URL to learn about the price and buy the dataset", "# Content\nThe folder files includes:\n- mask - includes videos of people wearing 2D mask with holes for eyes,\n- mask_cut - includes videos of people wearing 2D mask with holes for eyes, nose, and mouth,\n- outline - includes videos of people wearing 2D mask,\n- real - includes real videos of people", "### File with the extension .csv\n- file: link to access the file,\n- type: type of the video (*real, mask, outline, mask_cut*)", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: <URL\n\nTrainingData's GitHub: URL\n\n*keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset, spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution*" ]
[ "TAGS\n#task_categories-video-classification #task_categories-image-to-image #task_categories-image-classification #language-English #license-cc-by-nc-nd-4.0 #region-us \n", "# Mobile Face Liveness Detection\n\nThe dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*).\n\nIn the dataset, there are 4 types of videos filmed on mobile devices:\n\n- 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*)\n- 2D mask with holes for eyes, nose, and mouth - demonstration of an attack with a paper/cardboard mask with cutouts for the nose and mouth (*mask_cut*)\n- 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*)\n- Real Video - demonstration of a real person's face (*real*)\n\n![](URL\n\nThe dataset allows researchers and developers in recognizing and analyzing facial expressions, anti-spoofing tasks, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios.", "## Full version of the dataset includes 7,200+ videos of people, leave a request on TrainingData to buy the dataset", "### Statistics for the dataset (gender, type of the device, type of the attack):\n\n![](URL", "# Get the Dataset", "## This is just an example of the data \nLeave a request on URL to learn about the price and buy the dataset", "# Content\nThe folder files includes:\n- mask - includes videos of people wearing 2D mask with holes for eyes,\n- mask_cut - includes videos of people wearing 2D mask with holes for eyes, nose, and mouth,\n- outline - includes videos of people wearing 2D mask,\n- real - includes real videos of people", "### File with the extension .csv\n- file: link to access the file,\n- type: type of the video (*real, mask, outline, mask_cut*)", "## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: <URL\n\nTrainingData's GitHub: URL\n\n*keywords: ibeta level 1, ibeta level 2, liveness detection systems, liveness detection dataset, biometric dataset, biometric data dataset, biometric system attacks, anti-spoofing dataset, face liveness detection, deep learning dataset, face spoofing database, face anti-spoofing, face recognition, face detection, face identification, human video dataset, video dataset, presentation attack detection, presentation attack dataset, 2d print attacks, print 2d attacks dataset, printed 2d masks dataset, spoofing in 2D face recognition, facial masks, 2D face recognition systems, detecting face spoofing attacks, detecting presentation attacks, computer vision, surveillance face anti-spoofing, face liveness detection software solution*" ]
[ 57, 270, 27, 27, 5, 24, 70, 38, 226 ]
[ "passage: TAGS\n#task_categories-video-classification #task_categories-image-to-image #task_categories-image-classification #language-English #license-cc-by-nc-nd-4.0 #region-us \n# Mobile Face Liveness Detection\n\nThe dataset consists of videos featuring individuals wearing various types of masks. Videos are recorded under different lighting conditions and with different attributes (*glasses, masks, hats, hoods, wigs, and mustaches for men*).\n\nIn the dataset, there are 4 types of videos filmed on mobile devices:\n\n- 2D mask with holes for eyes - demonstration of an attack with a paper/cardboard mask (*mask*)\n- 2D mask with holes for eyes, nose, and mouth - demonstration of an attack with a paper/cardboard mask with cutouts for the nose and mouth (*mask_cut*)\n- 2D mask - demonstration of an attack with a paper/cardboard silhouette (*outline*)\n- Real Video - demonstration of a real person's face (*real*)\n\n![](URL\n\nThe dataset allows researchers and developers in recognizing and analyzing facial expressions, anti-spoofing tasks, face detection, re-identification and face recognition tasks. The inclusion of various attributes and different lighting conditions aims to enhance the robustness and effectiveness of anti-spoofing models in real-world scenarios.## Full version of the dataset includes 7,200+ videos of people, leave a request on TrainingData to buy the dataset### Statistics for the dataset (gender, type of the device, type of the attack):\n\n![](URL# Get the Dataset## This is just an example of the data \nLeave a request on URL to learn about the price and buy the dataset# Content\nThe folder files includes:\n- mask - includes videos of people wearing 2D mask with holes for eyes,\n- mask_cut - includes videos of people wearing 2D mask with holes for eyes, nose, and mouth,\n- outline - includes videos of people wearing 2D mask,\n- real - includes real videos of people" ]
bd0dde233f28d0b781510fe97d80303becd2daa3
# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T14:05:49.028580](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp/blob/main/results_2023-12-27T14-05-49.028580.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and in the "latest" split for each eval): ```python { "all": { "acc": 0.658157995796316, "acc_stderr": 0.03182120580439916, "acc_norm": 0.658605998345513, "acc_norm_stderr": 0.03247049308981452, "mc1": 0.3769889840881273, "mc1_stderr": 0.016965517578930354, "mc2": 0.5645055633835727, "mc2_stderr": 0.015535132786849723 }, "harness|arc:challenge|25": { "acc": 0.6382252559726962, "acc_stderr": 0.014041957945038078, "acc_norm": 0.6791808873720137, "acc_norm_stderr": 0.013640943091946531 }, "harness|hellaswag|10": { "acc": 0.6796454889464251, "acc_stderr": 0.004656591678606762, "acc_norm": 0.8631746664011153, "acc_norm_stderr": 0.003429605106216371 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.34, "acc_stderr": 0.047609522856952365, "acc_norm": 0.34, "acc_norm_stderr": 0.047609522856952365 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5851851851851851, "acc_stderr": 0.04256193767901408, "acc_norm": 0.5851851851851851, "acc_norm_stderr": 0.04256193767901408 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.7039473684210527, "acc_stderr": 0.03715062154998904, "acc_norm": 0.7039473684210527, "acc_norm_stderr": 0.03715062154998904 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.65, "acc_stderr": 0.0479372485441102, "acc_norm": 0.65, "acc_norm_stderr": 0.0479372485441102 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.690566037735849, "acc_stderr": 0.028450154794118637, "acc_norm": 0.690566037735849, "acc_norm_stderr": 0.028450154794118637 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03476590104304134, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03476590104304134 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.51, "acc_stderr": 0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956912 },
"harness|hendrycksTest-college_computer_science|5": { "acc": 0.58, "acc_stderr": 0.049604496374885836, "acc_norm": 0.58, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.6820809248554913, "acc_stderr": 0.0355068398916558, "acc_norm": 0.6820809248554913, "acc_norm_stderr": 0.0355068398916558 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.45098039215686275, "acc_stderr": 0.049512182523962625, "acc_norm": 0.45098039215686275, "acc_norm_stderr": 0.049512182523962625 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5914893617021276, "acc_stderr": 0.032134180267015755, "acc_norm": 0.5914893617021276, "acc_norm_stderr": 0.032134180267015755 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41005291005291006, "acc_stderr": 0.025331202438944444, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.025331202438944444 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.5, "acc_stderr": 0.04472135954999579, "acc_norm": 0.5, "acc_norm_stderr": 0.04472135954999579 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.39, "acc_stderr": 0.04902071300001974, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001974 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8064516129032258, "acc_stderr": 0.022475258525536057, "acc_norm": 0.8064516129032258, "acc_norm_stderr": 0.022475258525536057 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4876847290640394, "acc_stderr": 0.035169204442208966, "acc_norm": 0.4876847290640394, "acc_norm_stderr": 0.035169204442208966 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.73, "acc_stderr": 0.044619604333847394, "acc_norm": 0.73, "acc_norm_stderr": 0.044619604333847394 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7575757575757576, "acc_stderr": 0.03346409881055953, "acc_norm": 0.7575757575757576, "acc_norm_stderr": 0.03346409881055953 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945633, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945633 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9119170984455959, "acc_stderr": 0.02045374660160103, "acc_norm": 0.9119170984455959, "acc_norm_stderr": 0.02045374660160103 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6794871794871795, "acc_stderr": 0.02366129639396428, "acc_norm": 0.6794871794871795, "acc_norm_stderr": 0.02366129639396428 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.35555555555555557, "acc_stderr": 0.02918571494985741, "acc_norm": 0.35555555555555557, "acc_norm_stderr": 0.02918571494985741 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6890756302521008, "acc_stderr": 0.030066761582977938, "acc_norm": 0.6890756302521008, "acc_norm_stderr": 
0.030066761582977938 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3576158940397351, "acc_stderr": 0.03913453431177258, "acc_norm": 0.3576158940397351, "acc_norm_stderr": 0.03913453431177258 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8568807339449541, "acc_stderr": 0.01501446249716859, "acc_norm": 0.8568807339449541, "acc_norm_stderr": 0.01501446249716859 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5138888888888888, "acc_stderr": 0.034086558679777494, "acc_norm": 0.5138888888888888, "acc_norm_stderr": 0.034086558679777494 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8333333333333334, "acc_stderr": 0.026156867523931045, "acc_norm": 0.8333333333333334, "acc_norm_stderr": 0.026156867523931045 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8143459915611815, "acc_stderr": 0.02531049537694486, "acc_norm": 0.8143459915611815, "acc_norm_stderr": 0.02531049537694486 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.7130044843049327, "acc_stderr": 0.030360379710291947, "acc_norm": 0.7130044843049327, "acc_norm_stderr": 0.030360379710291947 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.8015267175572519, "acc_stderr": 0.03498149385462472, "acc_norm": 0.8015267175572519, "acc_norm_stderr": 0.03498149385462472 }, "harness|hendrycksTest-international_law|5": { "acc": 0.8016528925619835, "acc_stderr": 0.03640118271990946, "acc_norm": 0.8016528925619835, "acc_norm_stderr": 0.03640118271990946 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8240740740740741, "acc_stderr": 0.036809181416738807, "acc_norm": 0.8240740740740741, "acc_norm_stderr": 0.036809181416738807 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.803680981595092, "acc_stderr": 0.031207970394709218, "acc_norm": 0.803680981595092, "acc_norm_stderr": 0.031207970394709218 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.45535714285714285, "acc_stderr": 0.047268355537191, "acc_norm": 0.45535714285714285, "acc_norm_stderr": 0.047268355537191 }, "harness|hendrycksTest-management|5": { "acc": 0.8640776699029126, "acc_stderr": 0.033932957297610096, "acc_norm": 0.8640776699029126, "acc_norm_stderr": 0.033932957297610096 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8632478632478633, "acc_stderr": 0.02250903393707781, "acc_norm": 0.8632478632478633, "acc_norm_stderr": 0.02250903393707781 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8378033205619413, "acc_stderr": 0.013182222616720885, "acc_norm": 0.8378033205619413, "acc_norm_stderr": 0.013182222616720885 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7167630057803468, "acc_stderr": 0.024257901705323378, "acc_norm": 0.7167630057803468, "acc_norm_stderr": 0.024257901705323378 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.376536312849162, "acc_stderr": 0.0162046723851066, "acc_norm": 0.376536312849162, "acc_norm_stderr": 0.0162046723851066 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.738562091503268, "acc_stderr": 0.025160998214292456, "acc_norm": 0.738562091503268, "acc_norm_stderr": 0.025160998214292456 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.7202572347266881, "acc_stderr": 0.025494259350694912, "acc_norm": 0.7202572347266881, "acc_norm_stderr": 0.025494259350694912 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7345679012345679, "acc_stderr": 
0.024569223600460845, "acc_norm": 0.7345679012345679, "acc_norm_stderr": 0.024569223600460845 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.48226950354609927, "acc_stderr": 0.02980873964223777, "acc_norm": 0.48226950354609927, "acc_norm_stderr": 0.02980873964223777 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4758800521512386, "acc_stderr": 0.012755368722863937, "acc_norm": 0.4758800521512386, "acc_norm_stderr": 0.012755368722863937 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7169117647058824, "acc_stderr": 0.027365861131513812, "acc_norm": 0.7169117647058824, "acc_norm_stderr": 0.027365861131513812 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6781045751633987, "acc_stderr": 0.01890101532209309, "acc_norm": 0.6781045751633987, "acc_norm_stderr": 0.01890101532209309 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6636363636363637, "acc_stderr": 0.04525393596302506, "acc_norm": 0.6636363636363637, "acc_norm_stderr": 0.04525393596302506 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.7061224489795919, "acc_stderr": 0.029162738410249772, "acc_norm": 0.7061224489795919, "acc_norm_stderr": 0.029162738410249772 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8507462686567164, "acc_stderr": 0.02519692987482706, "acc_norm": 0.8507462686567164, "acc_norm_stderr": 0.02519692987482706 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.87, "acc_stderr": 0.033799766898963086, "acc_norm": 0.87, "acc_norm_stderr": 0.033799766898963086 }, "harness|hendrycksTest-virology|5": { "acc": 0.5301204819277109, "acc_stderr": 0.03885425420866767, "acc_norm": 0.5301204819277109, "acc_norm_stderr": 0.03885425420866767 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8362573099415205, "acc_stderr": 0.028380919596145866, "acc_norm": 0.8362573099415205, "acc_norm_stderr": 0.028380919596145866 }, "harness|truthfulqa:mc|0": { "mc1": 0.3769889840881273, "mc1_stderr": 0.016965517578930354, "mc2": 0.5645055633835727, "mc2_stderr": 0.015535132786849723 }, "harness|winogrande|5": { "acc": 0.7971586424625099, "acc_stderr": 0.011301439925936657 }, "harness|gsm8k|5": { "acc": 0.7172100075815011, "acc_stderr": 0.012405020417873619 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. 
--> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
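As a usage sketch, the per-run results JSON shown in this card can be fetched and inspected directly. The file name comes from the "Latest results" link above; the access pattern assumes the key layout matches the excerpt, with a fallback in case the metrics are nested under a "results" key:

```python
import json
from huggingface_hub import hf_hub_download

# File name taken from the "Latest results" link in the card above.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp",
    filename="results_2023-12-27T14-05-49.028580.json",
    repo_type="dataset",
)
with open(path) as f:
    data = json.load(f)

# Some result files nest the metrics under a "results" key; fall back to the
# top level so the access pattern matches the excerpt shown above.
metrics = data.get("results", data)
print(metrics["all"]["acc"])         # mean accuracy across all tasks
print(metrics["harness|gsm8k|5"])    # per-task metrics, e.g. GSM8K 5-shot
```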
open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp
[ "region:us" ]
2023-12-27T14:08:04+00:00
{"pretty_name": "Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp", "dataset_summary": "Dataset automatically created during the evaluation run of model [Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp](https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T14:05:49.028580](https://huggingface.co/datasets/open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp/blob/main/results_2023-12-27T14-05-49.028580.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.658157995796316,\n \"acc_stderr\": 0.03182120580439916,\n \"acc_norm\": 0.658605998345513,\n \"acc_norm_stderr\": 0.03247049308981452,\n \"mc1\": 0.3769889840881273,\n \"mc1_stderr\": 0.016965517578930354,\n \"mc2\": 0.5645055633835727,\n \"mc2_stderr\": 0.015535132786849723\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6382252559726962,\n \"acc_stderr\": 0.014041957945038078,\n \"acc_norm\": 0.6791808873720137,\n \"acc_norm_stderr\": 0.013640943091946531\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6796454889464251,\n \"acc_stderr\": 0.004656591678606762,\n \"acc_norm\": 0.8631746664011153,\n \"acc_norm_stderr\": 0.003429605106216371\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.028450154794118637,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.028450154794118637\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n 
\"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6820809248554913,\n \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.6820809248554913,\n \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.45098039215686275,\n \"acc_stderr\": 0.049512182523962625,\n \"acc_norm\": 0.45098039215686275,\n \"acc_norm_stderr\": 0.049512182523962625\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.025331202438944444,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.025331202438944444\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001974,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001974\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8064516129032258,\n \"acc_stderr\": 0.022475258525536057,\n \"acc_norm\": 0.8064516129032258,\n \"acc_norm_stderr\": 0.022475258525536057\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4876847290640394,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.4876847290640394,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.73,\n \"acc_stderr\": 0.044619604333847394,\n \"acc_norm\": 0.73,\n \"acc_norm_stderr\": 0.044619604333847394\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9119170984455959,\n \"acc_stderr\": 0.02045374660160103,\n \"acc_norm\": 0.9119170984455959,\n 
\"acc_norm_stderr\": 0.02045374660160103\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6794871794871795,\n \"acc_stderr\": 0.02366129639396428,\n \"acc_norm\": 0.6794871794871795,\n \"acc_norm_stderr\": 0.02366129639396428\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.35555555555555557,\n \"acc_stderr\": 0.02918571494985741,\n \"acc_norm\": 0.35555555555555557,\n \"acc_norm_stderr\": 0.02918571494985741\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.030066761582977938,\n \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.030066761582977938\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8568807339449541,\n \"acc_stderr\": 0.01501446249716859,\n \"acc_norm\": 0.8568807339449541,\n \"acc_norm_stderr\": 0.01501446249716859\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5138888888888888,\n \"acc_stderr\": 0.034086558679777494,\n \"acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.034086558679777494\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8333333333333334,\n \"acc_stderr\": 0.026156867523931045,\n \"acc_norm\": 0.8333333333333334,\n \"acc_norm_stderr\": 0.026156867523931045\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8143459915611815,\n \"acc_stderr\": 0.02531049537694486,\n \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.02531049537694486\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.7130044843049327,\n \"acc_stderr\": 0.030360379710291947,\n \"acc_norm\": 0.7130044843049327,\n \"acc_norm_stderr\": 0.030360379710291947\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.8015267175572519,\n \"acc_stderr\": 0.03498149385462472,\n \"acc_norm\": 0.8015267175572519,\n \"acc_norm_stderr\": 0.03498149385462472\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8016528925619835,\n \"acc_stderr\": 0.03640118271990946,\n \"acc_norm\": 0.8016528925619835,\n \"acc_norm_stderr\": 0.03640118271990946\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8240740740740741,\n \"acc_stderr\": 0.036809181416738807,\n \"acc_norm\": 0.8240740740740741,\n \"acc_norm_stderr\": 0.036809181416738807\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709218,\n \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709218\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8640776699029126,\n \"acc_stderr\": 0.033932957297610096,\n \"acc_norm\": 0.8640776699029126,\n \"acc_norm_stderr\": 0.033932957297610096\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.02250903393707781,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.02250903393707781\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8378033205619413,\n \"acc_stderr\": 0.013182222616720885,\n \"acc_norm\": 0.8378033205619413,\n \"acc_norm_stderr\": 0.013182222616720885\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7167630057803468,\n \"acc_stderr\": 0.024257901705323378,\n \"acc_norm\": 0.7167630057803468,\n \"acc_norm_stderr\": 0.024257901705323378\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.376536312849162,\n \"acc_stderr\": 0.0162046723851066,\n \"acc_norm\": 0.376536312849162,\n \"acc_norm_stderr\": 0.0162046723851066\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7202572347266881,\n \"acc_stderr\": 0.025494259350694912,\n \"acc_norm\": 0.7202572347266881,\n \"acc_norm_stderr\": 0.025494259350694912\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7345679012345679,\n \"acc_stderr\": 0.024569223600460845,\n \"acc_norm\": 0.7345679012345679,\n \"acc_norm_stderr\": 0.024569223600460845\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48226950354609927,\n \"acc_stderr\": 0.02980873964223777,\n \"acc_norm\": 0.48226950354609927,\n \"acc_norm_stderr\": 0.02980873964223777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4758800521512386,\n \"acc_stderr\": 0.012755368722863937,\n \"acc_norm\": 0.4758800521512386,\n \"acc_norm_stderr\": 0.012755368722863937\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7169117647058824,\n \"acc_stderr\": 0.027365861131513812,\n \"acc_norm\": 0.7169117647058824,\n \"acc_norm_stderr\": 0.027365861131513812\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6781045751633987,\n \"acc_stderr\": 0.01890101532209309,\n \"acc_norm\": 0.6781045751633987,\n \"acc_norm_stderr\": 0.01890101532209309\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7061224489795919,\n \"acc_stderr\": 0.029162738410249772,\n \"acc_norm\": 0.7061224489795919,\n \"acc_norm_stderr\": 0.029162738410249772\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8507462686567164,\n \"acc_stderr\": 0.02519692987482706,\n \"acc_norm\": 0.8507462686567164,\n \"acc_norm_stderr\": 0.02519692987482706\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.87,\n \"acc_stderr\": 0.033799766898963086,\n \"acc_norm\": 0.87,\n \"acc_norm_stderr\": 0.033799766898963086\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5301204819277109,\n \"acc_stderr\": 0.03885425420866767,\n \"acc_norm\": 0.5301204819277109,\n \"acc_norm_stderr\": 0.03885425420866767\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3769889840881273,\n \"mc1_stderr\": 0.016965517578930354,\n \"mc2\": 0.5645055633835727,\n \"mc2_stderr\": 0.015535132786849723\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7971586424625099,\n \"acc_stderr\": 0.011301439925936657\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.7172100075815011,\n \"acc_stderr\": 0.012405020417873619\n }\n}\n```", "repo_url": "https://huggingface.co/Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|arc:challenge|25_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|gsm8k|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hellaswag|10_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-05-49.028580.parquet", 
"**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-05-49.028580.parquet", 
"**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-05-49.028580.parquet", 
"**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T14-05-49.028580.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", 
"data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": 
"2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-05-49.028580.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["**/details_harness|winogrande|5_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-12-27T14-05-49.028580.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T14_05_49.028580", "path": ["results_2023-12-27T14-05-49.028580.parquet"]}, {"split": "latest", "path": ["results_2023-12-27T14-05-49.028580.parquet"]}]}]}
2023-12-27T14:08:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp Dataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T14:05:49.028580 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
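The loading snippet referenced by "do the following" did not survive this rendering of the card. The sketch below reconstructs it from the standard Open LLM Leaderboard pattern: the repository id is an assumption inferred from the model name (the leaderboard's usual `details_<org>__<model>` convention), while the `harness_winogrande_5` configuration name is taken from this record's own metadata.

```python
from datasets import load_dataset

# Repo id follows the leaderboard's details_<org>__<model> convention
# (inferred, not quoted from this card); the config name comes from the
# record's own metadata, and "train" points at the latest results.
data = load_dataset(
    "open-llm-leaderboard/details_Weyaxi__OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp",
    "harness_winogrande_5",
    split="train",
)
```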
[ "# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T14:05:49.028580(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T14:05:49.028580(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 217, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp\n\n\n\nDataset automatically created during the evaluation run of model Weyaxi/OpenHermes-2.5-neural-chat-v3-3-openchat-3.5-1210-Slerp on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T14:05:49.028580(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:" ]
774edfc6bf92e9ca0ad29aeb5c1eccf4e9968182
# Dataset Card for "paddy-disease-classification" Taken from the Paddy Doctor Kaggle [Competition](https://www.kaggle.com/competitions/paddy-disease-classification/) [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
anthony2261/paddy-disease-classification
[ "task_categories:image-classification", "size_categories:1K<n<10K", "biology", "medical", "region:us" ]
2023-12-27T14:13:38+00:00
{"size_categories": ["1K<n<10K"], "task_categories": ["image-classification"], "pretty_name": "Paddy Disease Classification", "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bacterial_leaf_blight", "1": "bacterial_leaf_streak", "2": "bacterial_panicle_blight", "3": "blast", "4": "brown_spot", "5": "dead_heart", "6": "downy_mildew", "7": "hispa", "8": "normal", "9": "tungro"}}}}, {"name": "variety", "dtype": {"class_label": {"names": {"0": "ADT45", "1": "IR20", "2": "KarnatakaPonni", "3": "Onthanel", "4": "Ponni", "5": "Surya", "6": "Zonal", "7": "AndraPonni", "8": "AtchayaPonni", "9": "RR"}}}}, {"name": "age", "dtype": "int16"}], "splits": [{"name": "train", "num_bytes": 834127756.552, "num_examples": 10407}], "download_size": 816344863, "dataset_size": 834127756.552}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "tags": ["biology", "medical"]}
2023-12-27T18:40:23+00:00
[]
[]
TAGS #task_categories-image-classification #size_categories-1K<n<10K #biology #medical #region-us
# Dataset Card for "paddy-disease-classification" Taken from the Paddy Doctor Kaggle Competition More Information needed
[ "# Dataset Card for \"paddy-disease-classification\"\n\nTaken from the Paddy Doctor Kaggle Competition\n\nMore Information needed" ]
[ "TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #biology #medical #region-us \n", "# Dataset Card for \"paddy-disease-classification\"\n\nTaken from the Paddy Doctor Kaggle Competition\n\nMore Information needed" ]
[ 35, 29 ]
[ "passage: TAGS\n#task_categories-image-classification #size_categories-1K<n<10K #biology #medical #region-us \n# Dataset Card for \"paddy-disease-classification\"\n\nTaken from the Paddy Doctor Kaggle Competition\n\nMore Information needed" ]
6d5fbd45e94980223650aa9aa061ab4bf3f2fa88
# National Taiwan Normal University Course Catalog

Range: 109-1 ~ 112-1

## Data

Each dataset is an array of objects, each object representing a course:

```ts
interface Course {
  serial: number
  code: string
  name: string
  ename: string
  type: string
  department: string
  form: string
  credit: number
  duration: string
  gu_domain: string
  description: string
  goals: [goal: string, capability: string][]
  teacher: string[]
  schedule: string
  methods: [method: string, description: string][]
  evaluations: [weight: number, type: string, description: string][]
  material: string
  enrollment: number
  limit: number
  schedules: {
    day: number
    period: [from: number, to: number]
    location: string
    classroom: string
  }[]
  programs: string[]
  comment: string
  restriction: string
}
```

## Copyright

The copyright is owned by National Taiwan Normal University.

Source: <https://courseap2.itc.ntnu.edu.tw/acadmOpenCourse/index.jsp>
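A small consumer sketch in Python (rather than the TypeScript used for the type declaration above). The file name `112-1.json` is hypothetical, one JSON array per semester, since the card does not spell out the file layout; the day-numbering convention is likewise assumed.

```python
import json

# Hypothetical file name: one JSON array of Course objects per semester.
with open("112-1.json", encoding="utf-8") as f:
    courses = json.load(f)

# Example query: courses worth 3+ credits with a meeting on day 1
# (assuming 1 = Monday, which the card does not state explicitly).
for course in courses:
    if course["credit"] >= 3 and any(s["day"] == 1 for s in course["schedules"]):
        print(course["serial"], course["name"], ", ".join(course["teacher"]))
```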
JacobLinCool/NTNU-Course
[ "language:zh", "region:us" ]
2023-12-27T14:17:51+00:00
{"language": ["zh"]}
2023-12-27T18:37:25+00:00
[]
[ "zh" ]
TAGS #language-Chinese #region-us
# National Taiwan Normal University Course Catalog Range: 109-1 ~ 112-1 ## Data Each dataset is an array of objects, each object represents a course: ## Copyright The copyright is owned by National Taiwan Normal University. Source: <URL
[ "# National Taiwan Normal University Course Catalog\n\nRange: 109-1 ~ 112-1", "## Data\n\nEach dataset is an array of objects, each object represents a course:", "## Copyright\n\nThe copyright is owned by National Taiwan Normal University.\n\nSource: <URL" ]
[ "TAGS\n#language-Chinese #region-us \n", "# National Taiwan Normal University Course Catalog\n\nRange: 109-1 ~ 112-1", "## Data\n\nEach dataset is an array of objects, each object represents a course:", "## Copyright\n\nThe copyright is owned by National Taiwan Normal University.\n\nSource: <URL" ]
[ 11, 14, 20, 17 ]
[ "passage: TAGS\n#language-Chinese #region-us \n# National Taiwan Normal University Course Catalog\n\nRange: 109-1 ~ 112-1## Data\n\nEach dataset is an array of objects, each object represents a course:## Copyright\n\nThe copyright is owned by National Taiwan Normal University.\n\nSource: <URL" ]
9fed8a0dcabfdae8d75e9aec8412b57981122db0
# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-13b-chat-hf", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-29T13:17:24.378047](https://huggingface.co/datasets/open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-13b-chat-hf/blob/main/results_2023-12-29T13-17-24.378047.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.5079906839832642, "acc_stderr": 0.03424315613001413, "acc_norm": 0.516456403503363, "acc_norm_stderr": 0.03513746029855274, "mc1": 0.2802937576499388, "mc1_stderr": 0.015723139524608763, "mc2": 0.4416133249202012, "mc2_stderr": 0.01548425276508773 }, "harness|arc:challenge|25": { "acc": 0.4974402730375427, "acc_stderr": 0.014611199329843784, "acc_norm": 0.5358361774744027, "acc_norm_stderr": 0.01457381366473572 }, "harness|hellaswag|10": { "acc": 0.5986855208125871, "acc_stderr": 0.004891626718097025, "acc_norm": 0.7908783110934077, "acc_norm_stderr": 0.0040585031572305955 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.33, "acc_stderr": 0.04725815626252605, "acc_norm": 0.33, "acc_norm_stderr": 0.04725815626252605 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.45185185185185184, "acc_stderr": 0.04299268905480864, "acc_norm": 0.45185185185185184, "acc_norm_stderr": 0.04299268905480864 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.45394736842105265, "acc_stderr": 0.04051646342874143, "acc_norm": 0.45394736842105265, "acc_norm_stderr": 0.04051646342874143 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.52, "acc_stderr": 0.050211673156867795, "acc_norm": 0.52, "acc_norm_stderr": 0.050211673156867795 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.5320754716981132, "acc_stderr": 0.03070948699255655, "acc_norm": 0.5320754716981132, "acc_norm_stderr": 0.03070948699255655 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.5694444444444444, "acc_stderr": 0.04140685639111503, "acc_norm": 0.5694444444444444, "acc_norm_stderr": 0.04140685639111503 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.43, "acc_stderr": 0.04975698519562428, 
"acc_norm": 0.43, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.49710982658959535, "acc_stderr": 0.038124005659748335, "acc_norm": 0.49710982658959535, "acc_norm_stderr": 0.038124005659748335 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.3235294117647059, "acc_stderr": 0.046550104113196177, "acc_norm": 0.3235294117647059, "acc_norm_stderr": 0.046550104113196177 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.64, "acc_stderr": 0.048241815132442176, "acc_norm": 0.64, "acc_norm_stderr": 0.048241815132442176 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.4425531914893617, "acc_stderr": 0.03246956919789958, "acc_norm": 0.4425531914893617, "acc_norm_stderr": 0.03246956919789958 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.30701754385964913, "acc_stderr": 0.0433913832257986, "acc_norm": 0.30701754385964913, "acc_norm_stderr": 0.0433913832257986 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.46206896551724136, "acc_stderr": 0.041546596717075474, "acc_norm": 0.46206896551724136, "acc_norm_stderr": 0.041546596717075474 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.30687830687830686, "acc_stderr": 0.023752928712112143, "acc_norm": 0.30687830687830686, "acc_norm_stderr": 0.023752928712112143 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.29365079365079366, "acc_stderr": 0.040735243221471255, "acc_norm": 0.29365079365079366, "acc_norm_stderr": 0.040735243221471255 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.6, "acc_stderr": 0.027869320571664632, "acc_norm": 0.6, "acc_norm_stderr": 0.027869320571664632 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.4187192118226601, "acc_stderr": 0.034711928605184676, "acc_norm": 0.4187192118226601, "acc_norm_stderr": 0.034711928605184676 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.6363636363636364, "acc_stderr": 0.03756335775187897, "acc_norm": 0.6363636363636364, "acc_norm_stderr": 0.03756335775187897 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.6464646464646465, "acc_stderr": 0.03406086723547155, "acc_norm": 0.6464646464646465, "acc_norm_stderr": 0.03406086723547155 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.7046632124352331, "acc_stderr": 0.032922966391551414, "acc_norm": 0.7046632124352331, "acc_norm_stderr": 0.032922966391551414 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.4794871794871795, "acc_stderr": 0.025329663163489943, "acc_norm": 0.4794871794871795, "acc_norm_stderr": 0.025329663163489943 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.2851851851851852, "acc_stderr": 0.027528599210340496, "acc_norm": 0.2851851851851852, "acc_norm_stderr": 0.027528599210340496 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.5042016806722689, "acc_stderr": 0.03247734334448111, "acc_norm": 0.5042016806722689, "acc_norm_stderr": 0.03247734334448111 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 
0.31125827814569534, "acc_stderr": 0.03780445850526733, "acc_norm": 0.31125827814569534, "acc_norm_stderr": 0.03780445850526733 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.6642201834862386, "acc_stderr": 0.020248081396752923, "acc_norm": 0.6642201834862386, "acc_norm_stderr": 0.020248081396752923 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.33796296296296297, "acc_stderr": 0.03225941352631295, "acc_norm": 0.33796296296296297, "acc_norm_stderr": 0.03225941352631295 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.6470588235294118, "acc_stderr": 0.03354092437591519, "acc_norm": 0.6470588235294118, "acc_norm_stderr": 0.03354092437591519 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.679324894514768, "acc_stderr": 0.030381931949990407, "acc_norm": 0.679324894514768, "acc_norm_stderr": 0.030381931949990407 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6098654708520179, "acc_stderr": 0.03273766725459157, "acc_norm": 0.6098654708520179, "acc_norm_stderr": 0.03273766725459157 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.5725190839694656, "acc_stderr": 0.043389203057924, "acc_norm": 0.5725190839694656, "acc_norm_stderr": 0.043389203057924 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7024793388429752, "acc_stderr": 0.04173349148083499, "acc_norm": 0.7024793388429752, "acc_norm_stderr": 0.04173349148083499 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.6203703703703703, "acc_stderr": 0.04691521224077742, "acc_norm": 0.6203703703703703, "acc_norm_stderr": 0.04691521224077742 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.5766871165644172, "acc_stderr": 0.038818912133343826, "acc_norm": 0.5766871165644172, "acc_norm_stderr": 0.038818912133343826 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.33035714285714285, "acc_stderr": 0.04464285714285714, "acc_norm": 0.33035714285714285, "acc_norm_stderr": 0.04464285714285714 }, "harness|hendrycksTest-management|5": { "acc": 0.6893203883495146, "acc_stderr": 0.04582124160161549, "acc_norm": 0.6893203883495146, "acc_norm_stderr": 0.04582124160161549 }, "harness|hendrycksTest-marketing|5": { "acc": 0.7564102564102564, "acc_stderr": 0.028120966503914407, "acc_norm": 0.7564102564102564, "acc_norm_stderr": 0.028120966503914407 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.6896551724137931, "acc_stderr": 0.016543785026048304, "acc_norm": 0.6896551724137931, "acc_norm_stderr": 0.016543785026048304 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6011560693641619, "acc_stderr": 0.026362437574546545, "acc_norm": 0.6011560693641619, "acc_norm_stderr": 0.026362437574546545 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3687150837988827, "acc_stderr": 0.016135759015030122, "acc_norm": 0.3687150837988827, "acc_norm_stderr": 0.016135759015030122 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.6013071895424836, "acc_stderr": 0.028036092273891765, "acc_norm": 0.6013071895424836, "acc_norm_stderr": 0.028036092273891765 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.5980707395498392, "acc_stderr": 0.027846476005930473, "acc_norm": 0.5980707395498392, "acc_norm_stderr": 0.027846476005930473 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.5709876543209876, "acc_stderr": 0.027538925613470863, "acc_norm": 0.5709876543209876, "acc_norm_stderr": 
0.027538925613470863 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.36879432624113473, "acc_stderr": 0.028782227561347237, "acc_norm": 0.36879432624113473, "acc_norm_stderr": 0.028782227561347237 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.3878748370273794, "acc_stderr": 0.012444998309675617, "acc_norm": 0.3878748370273794, "acc_norm_stderr": 0.012444998309675617 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.3897058823529412, "acc_stderr": 0.0296246635811597, "acc_norm": 0.3897058823529412, "acc_norm_stderr": 0.0296246635811597 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.4820261437908497, "acc_stderr": 0.020214761037872408, "acc_norm": 0.4820261437908497, "acc_norm_stderr": 0.020214761037872408 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6, "acc_stderr": 0.0469237132203465, "acc_norm": 0.6, "acc_norm_stderr": 0.0469237132203465 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.5877551020408164, "acc_stderr": 0.03151236044674268, "acc_norm": 0.5877551020408164, "acc_norm_stderr": 0.03151236044674268 }, "harness|hendrycksTest-sociology|5": { "acc": 0.6467661691542289, "acc_stderr": 0.03379790611796777, "acc_norm": 0.6467661691542289, "acc_norm_stderr": 0.03379790611796777 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-virology|5": { "acc": 0.41566265060240964, "acc_stderr": 0.03836722176598052, "acc_norm": 0.41566265060240964, "acc_norm_stderr": 0.03836722176598052 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.7426900584795322, "acc_stderr": 0.03352799844161865, "acc_norm": 0.7426900584795322, "acc_norm_stderr": 0.03352799844161865 }, "harness|truthfulqa:mc|0": { "mc1": 0.2802937576499388, "mc1_stderr": 0.015723139524608763, "mc2": 0.4416133249202012, "mc2_stderr": 0.01548425276508773 }, "harness|winogrande|5": { "acc": 0.7387529597474349, "acc_stderr": 0.01234691486341531 }, "harness|gsm8k|5": { "acc": 0.008339651250947688, "acc_stderr": 0.002504942226860527 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
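As a complement to the per-task loading snippet near the top of this card, here is a sketch for pulling the aggregated numbers, assuming the "results" configuration and "latest" split follow the convention described above (split names are otherwise run timestamps):

```python
from datasets import load_dataset

# "results" stores the aggregated metrics for a run; "latest" points at the
# most recent evaluation, per the convention described in this card.
results = load_dataset(
    "open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-13b-chat-hf",
    "results",
    split="latest",
)
print(results[0])
```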
open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-13b-chat-hf
[ "region:us" ]
2023-12-27T14:28:32+00:00
{"pretty_name": "Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf", "dataset_summary": "Dataset automatically created during the evaluation run of model [openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-13b-chat-hf\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-29T13:17:24.378047](https://huggingface.co/datasets/open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-13b-chat-hf/blob/main/results_2023-12-29T13-17-24.378047.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5079906839832642,\n \"acc_stderr\": 0.03424315613001413,\n \"acc_norm\": 0.516456403503363,\n \"acc_norm_stderr\": 0.03513746029855274,\n \"mc1\": 0.2802937576499388,\n \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4416133249202012,\n \"mc2_stderr\": 0.01548425276508773\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.4974402730375427,\n \"acc_stderr\": 0.014611199329843784,\n \"acc_norm\": 0.5358361774744027,\n \"acc_norm_stderr\": 0.01457381366473572\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5986855208125871,\n \"acc_stderr\": 0.004891626718097025,\n \"acc_norm\": 0.7908783110934077,\n \"acc_norm_stderr\": 0.0040585031572305955\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252605,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252605\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.45185185185185184,\n \"acc_stderr\": 0.04299268905480864,\n \"acc_norm\": 0.45185185185185184,\n \"acc_norm_stderr\": 0.04299268905480864\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.45394736842105265,\n \"acc_stderr\": 0.04051646342874143,\n \"acc_norm\": 0.45394736842105265,\n \"acc_norm_stderr\": 0.04051646342874143\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.5320754716981132,\n \"acc_stderr\": 0.03070948699255655,\n \"acc_norm\": 0.5320754716981132,\n \"acc_norm_stderr\": 0.03070948699255655\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5694444444444444,\n \"acc_stderr\": 0.04140685639111503,\n \"acc_norm\": 0.5694444444444444,\n \"acc_norm_stderr\": 0.04140685639111503\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.49710982658959535,\n \"acc_stderr\": 0.038124005659748335,\n \"acc_norm\": 0.49710982658959535,\n \"acc_norm_stderr\": 0.038124005659748335\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3235294117647059,\n \"acc_stderr\": 0.046550104113196177,\n \"acc_norm\": 0.3235294117647059,\n \"acc_norm_stderr\": 0.046550104113196177\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.64,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.64,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.4425531914893617,\n \"acc_stderr\": 0.03246956919789958,\n \"acc_norm\": 0.4425531914893617,\n \"acc_norm_stderr\": 0.03246956919789958\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.30701754385964913,\n \"acc_stderr\": 0.0433913832257986,\n \"acc_norm\": 0.30701754385964913,\n \"acc_norm_stderr\": 0.0433913832257986\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.46206896551724136,\n \"acc_stderr\": 0.041546596717075474,\n \"acc_norm\": 0.46206896551724136,\n \"acc_norm_stderr\": 0.041546596717075474\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.30687830687830686,\n \"acc_stderr\": 0.023752928712112143,\n \"acc_norm\": 0.30687830687830686,\n \"acc_norm_stderr\": 0.023752928712112143\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n \"acc_stderr\": 0.040735243221471255,\n \"acc_norm\": 0.29365079365079366,\n \"acc_norm_stderr\": 0.040735243221471255\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.027869320571664632,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.027869320571664632\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4187192118226601,\n \"acc_stderr\": 0.034711928605184676,\n \"acc_norm\": 0.4187192118226601,\n \"acc_norm_stderr\": 0.034711928605184676\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.6363636363636364,\n \"acc_stderr\": 0.03756335775187897,\n \"acc_norm\": 0.6363636363636364,\n \"acc_norm_stderr\": 0.03756335775187897\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6464646464646465,\n \"acc_stderr\": 0.03406086723547155,\n \"acc_norm\": 0.6464646464646465,\n \"acc_norm_stderr\": 0.03406086723547155\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.7046632124352331,\n \"acc_stderr\": 0.032922966391551414,\n \"acc_norm\": 0.7046632124352331,\n \"acc_norm_stderr\": 
0.032922966391551414\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.4794871794871795,\n \"acc_stderr\": 0.025329663163489943,\n \"acc_norm\": 0.4794871794871795,\n \"acc_norm_stderr\": 0.025329663163489943\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2851851851851852,\n \"acc_stderr\": 0.027528599210340496,\n \"acc_norm\": 0.2851851851851852,\n \"acc_norm_stderr\": 0.027528599210340496\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.5042016806722689,\n \"acc_stderr\": 0.03247734334448111,\n \"acc_norm\": 0.5042016806722689,\n \"acc_norm_stderr\": 0.03247734334448111\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31125827814569534,\n \"acc_stderr\": 0.03780445850526733,\n \"acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.03780445850526733\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.6642201834862386,\n \"acc_stderr\": 0.020248081396752923,\n \"acc_norm\": 0.6642201834862386,\n \"acc_norm_stderr\": 0.020248081396752923\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.33796296296296297,\n \"acc_stderr\": 0.03225941352631295,\n \"acc_norm\": 0.33796296296296297,\n \"acc_norm_stderr\": 0.03225941352631295\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.03354092437591519,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.03354092437591519\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.679324894514768,\n \"acc_stderr\": 0.030381931949990407,\n \"acc_norm\": 0.679324894514768,\n \"acc_norm_stderr\": 0.030381931949990407\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6098654708520179,\n \"acc_stderr\": 0.03273766725459157,\n \"acc_norm\": 0.6098654708520179,\n \"acc_norm_stderr\": 0.03273766725459157\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5725190839694656,\n \"acc_stderr\": 0.043389203057924,\n \"acc_norm\": 0.5725190839694656,\n \"acc_norm_stderr\": 0.043389203057924\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7024793388429752,\n \"acc_stderr\": 0.04173349148083499,\n \"acc_norm\": 0.7024793388429752,\n \"acc_norm_stderr\": 0.04173349148083499\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.6203703703703703,\n \"acc_stderr\": 0.04691521224077742,\n \"acc_norm\": 0.6203703703703703,\n \"acc_norm_stderr\": 0.04691521224077742\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.5766871165644172,\n \"acc_stderr\": 0.038818912133343826,\n \"acc_norm\": 0.5766871165644172,\n \"acc_norm_stderr\": 0.038818912133343826\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.33035714285714285,\n \"acc_stderr\": 0.04464285714285714,\n \"acc_norm\": 0.33035714285714285,\n \"acc_norm_stderr\": 0.04464285714285714\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6893203883495146,\n \"acc_stderr\": 0.04582124160161549,\n \"acc_norm\": 0.6893203883495146,\n \"acc_norm_stderr\": 0.04582124160161549\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7564102564102564,\n \"acc_stderr\": 0.028120966503914407,\n \"acc_norm\": 0.7564102564102564,\n \"acc_norm_stderr\": 0.028120966503914407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.6896551724137931,\n \"acc_stderr\": 0.016543785026048304,\n \"acc_norm\": 0.6896551724137931,\n \"acc_norm_stderr\": 0.016543785026048304\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6011560693641619,\n \"acc_stderr\": 0.026362437574546545,\n \"acc_norm\": 0.6011560693641619,\n \"acc_norm_stderr\": 0.026362437574546545\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3687150837988827,\n \"acc_stderr\": 0.016135759015030122,\n \"acc_norm\": 0.3687150837988827,\n \"acc_norm_stderr\": 0.016135759015030122\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6013071895424836,\n \"acc_stderr\": 0.028036092273891765,\n \"acc_norm\": 0.6013071895424836,\n \"acc_norm_stderr\": 0.028036092273891765\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5980707395498392,\n \"acc_stderr\": 0.027846476005930473,\n \"acc_norm\": 0.5980707395498392,\n \"acc_norm_stderr\": 0.027846476005930473\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.5709876543209876,\n \"acc_stderr\": 0.027538925613470863,\n \"acc_norm\": 0.5709876543209876,\n \"acc_norm_stderr\": 0.027538925613470863\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.36879432624113473,\n \"acc_stderr\": 0.028782227561347237,\n \"acc_norm\": 0.36879432624113473,\n \"acc_norm_stderr\": 0.028782227561347237\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3878748370273794,\n \"acc_stderr\": 0.012444998309675617,\n \"acc_norm\": 0.3878748370273794,\n \"acc_norm_stderr\": 0.012444998309675617\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3897058823529412,\n \"acc_stderr\": 0.0296246635811597,\n \"acc_norm\": 0.3897058823529412,\n \"acc_norm_stderr\": 0.0296246635811597\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4820261437908497,\n \"acc_stderr\": 0.020214761037872408,\n \"acc_norm\": 0.4820261437908497,\n \"acc_norm_stderr\": 0.020214761037872408\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.0469237132203465,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.0469237132203465\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.5877551020408164,\n \"acc_stderr\": 0.03151236044674268,\n \"acc_norm\": 0.5877551020408164,\n \"acc_norm_stderr\": 0.03151236044674268\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6467661691542289,\n \"acc_stderr\": 0.03379790611796777,\n \"acc_norm\": 0.6467661691542289,\n \"acc_norm_stderr\": 0.03379790611796777\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.41566265060240964,\n \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.41566265060240964,\n \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7426900584795322,\n \"acc_stderr\": 0.03352799844161865,\n \"acc_norm\": 0.7426900584795322,\n \"acc_norm_stderr\": 0.03352799844161865\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2802937576499388,\n \"mc1_stderr\": 0.015723139524608763,\n \"mc2\": 0.4416133249202012,\n \"mc2_stderr\": 0.01548425276508773\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7387529597474349,\n \"acc_stderr\": 0.01234691486341531\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.008339651250947688,\n 
\"acc_stderr\": 0.002504942226860527\n }\n}\n```", "repo_url": "https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|arc:challenge|25_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|arc:challenge|25_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|gsm8k|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|gsm8k|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hellaswag|10_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hellaswag|10_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-26-13.560554.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T14-26-13.560554.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-29T13-17-24.378047.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-29T13-17-24.378047.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T13-17-24.378047.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-29T13-17-24.378047.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T13-17-24.378047.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": 
"2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": 
"2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-26-13.560554.parquet"]}, 
{"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["**/details_harness|winogrande|5_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": ["**/details_harness|winogrande|5_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-29T13-17-24.378047.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T14_26_13.560554", "path": ["results_2023-12-27T14-26-13.560554.parquet"]}, {"split": "2023_12_29T13_17_24.378047", "path": 
["results_2023-12-29T13-17-24.378047.parquet"]}, {"split": "latest", "path": ["results_2023-12-29T13-17-24.378047.parquet"]}]}]}
2023-12-29T13:19:46+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf Dataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-29T13:17:24.378047 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
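The flattened card text above keeps the sentence "To load the details from a run, you can for instance do the following:" but drops the code block that followed it. Per the snippet embedded in this record's metadata, the details of a run are loaded like so (`harness_winogrande_5` is just one of the 63 listed task configurations):

```python
from datasets import load_dataset

# Per-sample details for one task configuration of this evaluation run;
# "harness_winogrande_5" is one of the 63 configs declared in the metadata.
data = load_dataset(
    "open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-13b-chat-hf",
    "harness_winogrande_5",
    split="train",  # "train" always points to the latest results
)
```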
[ "# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf\n\n\n\nDataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-29T13:17:24.378047(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf\n\n\n\nDataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-29T13:17:24.378047(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 203, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf\n\n\n\nDataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-13b-chat-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-29T13:17:24.378047(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]" ]
10d147c690b6cee4c6fc9759c00bd8137465d418
# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-7b-chat-ckpt-hf", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-29T21:18:42.609211](https://huggingface.co/datasets/open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-7b-chat-ckpt-hf/blob/main/results_2023-12-29T21-18-42.609211.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.3651457377147079, "acc_stderr": 0.0337649318691844, "acc_norm": 0.36947752907373566, "acc_norm_stderr": 0.03463087989078143, "mc1": 0.29865361077111385, "mc1_stderr": 0.016021570613768542, "mc2": 0.4999073626978088, "mc2_stderr": 0.015580803887648534 }, "harness|arc:challenge|25": { "acc": 0.41552901023890787, "acc_stderr": 0.014401366641216383, "acc_norm": 0.4496587030716723, "acc_norm_stderr": 0.01453714444428473 }, "harness|hellaswag|10": { "acc": 0.5032861979685321, "acc_stderr": 0.004989673640014256, "acc_norm": 0.7018522206731727, "acc_norm_stderr": 0.004565098421085231 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.35, "acc_stderr": 0.04793724854411021, "acc_norm": 0.35, "acc_norm_stderr": 0.04793724854411021 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.4222222222222222, "acc_stderr": 0.04266763404099582, "acc_norm": 0.4222222222222222, "acc_norm_stderr": 0.04266763404099582 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.3092105263157895, "acc_stderr": 0.037610708698674805, "acc_norm": 0.3092105263157895, "acc_norm_stderr": 0.037610708698674805 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.42, "acc_stderr": 0.04960449637488584, "acc_norm": 0.42, "acc_norm_stderr": 0.04960449637488584 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.35471698113207545, "acc_stderr": 0.029445175328199586, "acc_norm": 0.35471698113207545, "acc_norm_stderr": 0.029445175328199586 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.3611111111111111, "acc_stderr": 0.040166600304512336, "acc_norm": 0.3611111111111111, "acc_norm_stderr": 0.040166600304512336 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.23, "acc_stderr": 0.04229525846816505, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816505 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.37, "acc_stderr":
0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.3179190751445087, "acc_stderr": 0.0355068398916558, "acc_norm": 0.3179190751445087, "acc_norm_stderr": 0.0355068398916558 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.21568627450980393, "acc_stderr": 0.04092563958237654, "acc_norm": 0.21568627450980393, "acc_norm_stderr": 0.04092563958237654 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.43, "acc_stderr": 0.04975698519562428, "acc_norm": 0.43, "acc_norm_stderr": 0.04975698519562428 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.35319148936170214, "acc_stderr": 0.031245325202761926, "acc_norm": 0.35319148936170214, "acc_norm_stderr": 0.031245325202761926 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.2719298245614035, "acc_stderr": 0.04185774424022056, "acc_norm": 0.2719298245614035, "acc_norm_stderr": 0.04185774424022056 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.35172413793103446, "acc_stderr": 0.03979236637497411, "acc_norm": 0.35172413793103446, "acc_norm_stderr": 0.03979236637497411 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.25925925925925924, "acc_stderr": 0.02256989707491841, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.02256989707491841 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.25396825396825395, "acc_stderr": 0.03893259610604675, "acc_norm": 0.25396825396825395, "acc_norm_stderr": 0.03893259610604675 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.4032258064516129, "acc_stderr": 0.02790615082604114, "acc_norm": 0.4032258064516129, "acc_norm_stderr": 0.02790615082604114 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.27586206896551724, "acc_stderr": 0.031447125816782426, "acc_norm": 0.27586206896551724, "acc_norm_stderr": 0.031447125816782426 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.36, "acc_stderr": 0.048241815132442176, "acc_norm": 0.36, "acc_norm_stderr": 0.048241815132442176 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.47878787878787876, "acc_stderr": 0.03900828913737302, "acc_norm": 0.47878787878787876, "acc_norm_stderr": 0.03900828913737302 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.40404040404040403, "acc_stderr": 0.03496130972056127, "acc_norm": 0.40404040404040403, "acc_norm_stderr": 0.03496130972056127 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.43005181347150256, "acc_stderr": 0.035729543331448066, "acc_norm": 0.43005181347150256, "acc_norm_stderr": 0.035729543331448066 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2923076923076923, "acc_stderr": 0.023060438380857744, "acc_norm": 0.2923076923076923, "acc_norm_stderr": 0.023060438380857744 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.26296296296296295, "acc_stderr": 0.02684205787383371, "acc_norm": 0.26296296296296295, "acc_norm_stderr": 0.02684205787383371 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.31512605042016806, "acc_stderr": 0.03017680828897434, "acc_norm": 0.31512605042016806, "acc_norm_stderr": 0.03017680828897434 }, 
"harness|hendrycksTest-high_school_physics|5": { "acc": 0.2980132450331126, "acc_stderr": 0.037345356767871984, "acc_norm": 0.2980132450331126, "acc_norm_stderr": 0.037345356767871984 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3926605504587156, "acc_stderr": 0.020937505161201096, "acc_norm": 0.3926605504587156, "acc_norm_stderr": 0.020937505161201096 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.375, "acc_stderr": 0.033016908987210894, "acc_norm": 0.375, "acc_norm_stderr": 0.033016908987210894 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.4215686274509804, "acc_stderr": 0.03465868196380757, "acc_norm": 0.4215686274509804, "acc_norm_stderr": 0.03465868196380757 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.4978902953586498, "acc_stderr": 0.032546938018020076, "acc_norm": 0.4978902953586498, "acc_norm_stderr": 0.032546938018020076 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.3991031390134529, "acc_stderr": 0.03286745312567961, "acc_norm": 0.3991031390134529, "acc_norm_stderr": 0.03286745312567961 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.3511450381679389, "acc_stderr": 0.04186445163013751, "acc_norm": 0.3511450381679389, "acc_norm_stderr": 0.04186445163013751 }, "harness|hendrycksTest-international_law|5": { "acc": 0.6115702479338843, "acc_stderr": 0.04449270350068383, "acc_norm": 0.6115702479338843, "acc_norm_stderr": 0.04449270350068383 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.39814814814814814, "acc_stderr": 0.047323326159788154, "acc_norm": 0.39814814814814814, "acc_norm_stderr": 0.047323326159788154 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.37423312883435583, "acc_stderr": 0.038020681028996146, "acc_norm": 0.37423312883435583, "acc_norm_stderr": 0.038020681028996146 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.2857142857142857, "acc_stderr": 0.04287858751340456, "acc_norm": 0.2857142857142857, "acc_norm_stderr": 0.04287858751340456 }, "harness|hendrycksTest-management|5": { "acc": 0.2815533980582524, "acc_stderr": 0.04453254836326466, "acc_norm": 0.2815533980582524, "acc_norm_stderr": 0.04453254836326466 }, "harness|hendrycksTest-marketing|5": { "acc": 0.47435897435897434, "acc_stderr": 0.03271298896811159, "acc_norm": 0.47435897435897434, "acc_norm_stderr": 0.03271298896811159 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.39, "acc_stderr": 0.04902071300001975, "acc_norm": 0.39, "acc_norm_stderr": 0.04902071300001975 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.44189016602809705, "acc_stderr": 0.017758800534214424, "acc_norm": 0.44189016602809705, "acc_norm_stderr": 0.017758800534214424 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.3786127167630058, "acc_stderr": 0.026113749361310334, "acc_norm": 0.3786127167630058, "acc_norm_stderr": 0.026113749361310334 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24581005586592178, "acc_stderr": 0.014400296429225612, "acc_norm": 0.24581005586592178, "acc_norm_stderr": 0.014400296429225612 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.34967320261437906, "acc_stderr": 0.0273053080762747, "acc_norm": 0.34967320261437906, "acc_norm_stderr": 0.0273053080762747 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.4115755627009646, "acc_stderr": 0.027950481494401266, "acc_norm": 0.4115755627009646, "acc_norm_stderr": 0.027950481494401266 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.35185185185185186, "acc_stderr": 0.026571483480719978, "acc_norm": 
0.35185185185185186, "acc_norm_stderr": 0.026571483480719978 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.3120567375886525, "acc_stderr": 0.027640120545169927, "acc_norm": 0.3120567375886525, "acc_norm_stderr": 0.027640120545169927 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.3155149934810952, "acc_stderr": 0.011869184843058643, "acc_norm": 0.3155149934810952, "acc_norm_stderr": 0.011869184843058643 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.33088235294117646, "acc_stderr": 0.02858270975389844, "acc_norm": 0.33088235294117646, "acc_norm_stderr": 0.02858270975389844 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.4019607843137255, "acc_stderr": 0.019835176484375373, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.019835176484375373 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.37272727272727274, "acc_stderr": 0.04631381319425463, "acc_norm": 0.37272727272727274, "acc_norm_stderr": 0.04631381319425463 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.3020408163265306, "acc_stderr": 0.02939360931987981, "acc_norm": 0.3020408163265306, "acc_norm_stderr": 0.02939360931987981 }, "harness|hendrycksTest-sociology|5": { "acc": 0.39303482587064675, "acc_stderr": 0.0345368246603156, "acc_norm": 0.39303482587064675, "acc_norm_stderr": 0.0345368246603156 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.54, "acc_stderr": 0.05009082659620332, "acc_norm": 0.54, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-virology|5": { "acc": 0.3253012048192771, "acc_stderr": 0.03647168523683227, "acc_norm": 0.3253012048192771, "acc_norm_stderr": 0.03647168523683227 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.52046783625731, "acc_stderr": 0.0383161053282193, "acc_norm": 0.52046783625731, "acc_norm_stderr": 0.0383161053282193 }, "harness|truthfulqa:mc|0": { "mc1": 0.29865361077111385, "mc1_stderr": 0.016021570613768542, "mc2": 0.4999073626978088, "mc2_stderr": 0.015580803887648534 }, "harness|winogrande|5": { "acc": 0.6937647987371744, "acc_stderr": 0.012954385972802462 }, "harness|gsm8k|5": { "acc": 0.013646702047005308, "acc_stderr": 0.0031957470754808283 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. 
--> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-7b-chat-ckpt-hf
[ "region:us" ]
2023-12-27T14:29:08+00:00
{"pretty_name": "Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf", "dataset_summary": "Dataset automatically created during the evaluation run of model [openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf](https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-7b-chat-ckpt-hf\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-29T21:18:42.609211](https://huggingface.co/datasets/open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-7b-chat-ckpt-hf/blob/main/results_2023-12-29T21-18-42.609211.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.3651457377147079,\n \"acc_stderr\": 0.0337649318691844,\n \"acc_norm\": 0.36947752907373566,\n \"acc_norm_stderr\": 0.03463087989078143,\n \"mc1\": 0.29865361077111385,\n \"mc1_stderr\": 0.016021570613768542,\n \"mc2\": 0.4999073626978088,\n \"mc2_stderr\": 0.015580803887648534\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.41552901023890787,\n \"acc_stderr\": 0.014401366641216383,\n \"acc_norm\": 0.4496587030716723,\n \"acc_norm_stderr\": 0.01453714444428473\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5032861979685321,\n \"acc_stderr\": 0.004989673640014256,\n \"acc_norm\": 0.7018522206731727,\n \"acc_norm_stderr\": 0.004565098421085231\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411021,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411021\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4222222222222222,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.4222222222222222,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.3092105263157895,\n \"acc_stderr\": 0.037610708698674805,\n \"acc_norm\": 0.3092105263157895,\n \"acc_norm_stderr\": 0.037610708698674805\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.04960449637488584,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.04960449637488584\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.35471698113207545,\n \"acc_stderr\": 0.029445175328199586,\n \"acc_norm\": 0.35471698113207545,\n \"acc_norm_stderr\": 0.029445175328199586\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.3611111111111111,\n \"acc_stderr\": 0.040166600304512336,\n \"acc_norm\": 0.3611111111111111,\n \"acc_norm_stderr\": 0.040166600304512336\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3179190751445087,\n \"acc_stderr\": 0.0355068398916558,\n \"acc_norm\": 0.3179190751445087,\n \"acc_norm_stderr\": 0.0355068398916558\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.04975698519562428,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.04975698519562428\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.35319148936170214,\n \"acc_stderr\": 0.031245325202761926,\n \"acc_norm\": 0.35319148936170214,\n \"acc_norm_stderr\": 0.031245325202761926\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2719298245614035,\n \"acc_stderr\": 0.04185774424022056,\n \"acc_norm\": 0.2719298245614035,\n \"acc_norm_stderr\": 0.04185774424022056\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.35172413793103446,\n \"acc_stderr\": 0.03979236637497411,\n \"acc_norm\": 0.35172413793103446,\n \"acc_norm_stderr\": 0.03979236637497411\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.02256989707491841,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.02256989707491841\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.25396825396825395,\n \"acc_stderr\": 0.03893259610604675,\n \"acc_norm\": 0.25396825396825395,\n \"acc_norm_stderr\": 0.03893259610604675\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.4032258064516129,\n \"acc_stderr\": 0.02790615082604114,\n \"acc_norm\": 0.4032258064516129,\n \"acc_norm_stderr\": 0.02790615082604114\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.27586206896551724,\n \"acc_stderr\": 0.031447125816782426,\n \"acc_norm\": 0.27586206896551724,\n \"acc_norm_stderr\": 0.031447125816782426\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.47878787878787876,\n \"acc_stderr\": 0.03900828913737302,\n \"acc_norm\": 0.47878787878787876,\n \"acc_norm_stderr\": 0.03900828913737302\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.40404040404040403,\n \"acc_stderr\": 0.03496130972056127,\n \"acc_norm\": 0.40404040404040403,\n \"acc_norm_stderr\": 0.03496130972056127\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.43005181347150256,\n \"acc_stderr\": 0.035729543331448066,\n \"acc_norm\": 0.43005181347150256,\n 
\"acc_norm_stderr\": 0.035729543331448066\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2923076923076923,\n \"acc_stderr\": 0.023060438380857744,\n \"acc_norm\": 0.2923076923076923,\n \"acc_norm_stderr\": 0.023060438380857744\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.26296296296296295,\n \"acc_stderr\": 0.02684205787383371,\n \"acc_norm\": 0.26296296296296295,\n \"acc_norm_stderr\": 0.02684205787383371\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.31512605042016806,\n \"acc_stderr\": 0.03017680828897434,\n \"acc_norm\": 0.31512605042016806,\n \"acc_norm_stderr\": 0.03017680828897434\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2980132450331126,\n \"acc_stderr\": 0.037345356767871984,\n \"acc_norm\": 0.2980132450331126,\n \"acc_norm_stderr\": 0.037345356767871984\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3926605504587156,\n \"acc_stderr\": 0.020937505161201096,\n \"acc_norm\": 0.3926605504587156,\n \"acc_norm_stderr\": 0.020937505161201096\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.375,\n \"acc_stderr\": 0.033016908987210894,\n \"acc_norm\": 0.375,\n \"acc_norm_stderr\": 0.033016908987210894\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.03465868196380757,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.03465868196380757\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.4978902953586498,\n \"acc_stderr\": 0.032546938018020076,\n \"acc_norm\": 0.4978902953586498,\n \"acc_norm_stderr\": 0.032546938018020076\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.3991031390134529,\n \"acc_stderr\": 0.03286745312567961,\n \"acc_norm\": 0.3991031390134529,\n \"acc_norm_stderr\": 0.03286745312567961\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.3511450381679389,\n \"acc_stderr\": 0.04186445163013751,\n \"acc_norm\": 0.3511450381679389,\n \"acc_norm_stderr\": 0.04186445163013751\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6115702479338843,\n \"acc_stderr\": 0.04449270350068383,\n \"acc_norm\": 0.6115702479338843,\n \"acc_norm_stderr\": 0.04449270350068383\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.39814814814814814,\n \"acc_stderr\": 0.047323326159788154,\n \"acc_norm\": 0.39814814814814814,\n \"acc_norm_stderr\": 0.047323326159788154\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.37423312883435583,\n \"acc_stderr\": 0.038020681028996146,\n \"acc_norm\": 0.37423312883435583,\n \"acc_norm_stderr\": 0.038020681028996146\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.04287858751340456,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.04287858751340456\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.2815533980582524,\n \"acc_stderr\": 0.04453254836326466,\n \"acc_norm\": 0.2815533980582524,\n \"acc_norm_stderr\": 0.04453254836326466\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.47435897435897434,\n \"acc_stderr\": 0.03271298896811159,\n \"acc_norm\": 0.47435897435897434,\n \"acc_norm_stderr\": 0.03271298896811159\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.39,\n \"acc_stderr\": 0.04902071300001975,\n \"acc_norm\": 0.39,\n \"acc_norm_stderr\": 0.04902071300001975\n },\n 
\"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.44189016602809705,\n \"acc_stderr\": 0.017758800534214424,\n \"acc_norm\": 0.44189016602809705,\n \"acc_norm_stderr\": 0.017758800534214424\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.3786127167630058,\n \"acc_stderr\": 0.026113749361310334,\n \"acc_norm\": 0.3786127167630058,\n \"acc_norm_stderr\": 0.026113749361310334\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24581005586592178,\n \"acc_stderr\": 0.014400296429225612,\n \"acc_norm\": 0.24581005586592178,\n \"acc_norm_stderr\": 0.014400296429225612\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.34967320261437906,\n \"acc_stderr\": 0.0273053080762747,\n \"acc_norm\": 0.34967320261437906,\n \"acc_norm_stderr\": 0.0273053080762747\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.4115755627009646,\n \"acc_stderr\": 0.027950481494401266,\n \"acc_norm\": 0.4115755627009646,\n \"acc_norm_stderr\": 0.027950481494401266\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.35185185185185186,\n \"acc_stderr\": 0.026571483480719978,\n \"acc_norm\": 0.35185185185185186,\n \"acc_norm_stderr\": 0.026571483480719978\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.3120567375886525,\n \"acc_stderr\": 0.027640120545169927,\n \"acc_norm\": 0.3120567375886525,\n \"acc_norm_stderr\": 0.027640120545169927\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.3155149934810952,\n \"acc_stderr\": 0.011869184843058643,\n \"acc_norm\": 0.3155149934810952,\n \"acc_norm_stderr\": 0.011869184843058643\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.33088235294117646,\n \"acc_stderr\": 0.02858270975389844,\n \"acc_norm\": 0.33088235294117646,\n \"acc_norm_stderr\": 0.02858270975389844\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.019835176484375373,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.019835176484375373\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.37272727272727274,\n \"acc_stderr\": 0.04631381319425463,\n \"acc_norm\": 0.37272727272727274,\n \"acc_norm_stderr\": 0.04631381319425463\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.3020408163265306,\n \"acc_stderr\": 0.02939360931987981,\n \"acc_norm\": 0.3020408163265306,\n \"acc_norm_stderr\": 0.02939360931987981\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.39303482587064675,\n \"acc_stderr\": 0.0345368246603156,\n \"acc_norm\": 0.39303482587064675,\n \"acc_norm_stderr\": 0.0345368246603156\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3253012048192771,\n \"acc_stderr\": 0.03647168523683227,\n \"acc_norm\": 0.3253012048192771,\n \"acc_norm_stderr\": 0.03647168523683227\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.52046783625731,\n \"acc_stderr\": 0.0383161053282193,\n \"acc_norm\": 0.52046783625731,\n \"acc_norm_stderr\": 0.0383161053282193\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.29865361077111385,\n \"mc1_stderr\": 0.016021570613768542,\n \"mc2\": 0.4999073626978088,\n \"mc2_stderr\": 0.015580803887648534\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6937647987371744,\n \"acc_stderr\": 0.012954385972802462\n },\n \"harness|gsm8k|5\": {\n \"acc\": 
0.013646702047005308,\n \"acc_stderr\": 0.0031957470754808283\n }\n}\n```", "repo_url": "https://huggingface.co/openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|arc:challenge|25_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|arc:challenge|25_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|gsm8k|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|gsm8k|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hellaswag|10_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hellaswag|10_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-26-49.538112.parquet", 
"**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T14-26-49.538112.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-29T21-18-42.609211.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-29T21-18-42.609211.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T21-18-42.609211.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-29T21-18-42.609211.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T21-18-42.609211.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": 
"2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": 
"2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": 
["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-26-49.538112.parquet"]}, 
{"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": 
["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": 
["**/details_harness|hendrycksTest-public_relations|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["**/details_harness|winogrande|5_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": ["**/details_harness|winogrande|5_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-29T21-18-42.609211.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T14_26_49.538112", "path": ["results_2023-12-27T14-26-49.538112.parquet"]}, {"split": "2023_12_29T21_18_42.609211", "path": 
["results_2023-12-29T21-18-42.609211.parquet"]}, {"split": "latest", "path": ["results_2023-12-29T21-18-42.609211.parquet"]}]}]}
2023-12-29T21:21:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf Dataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-29T21:18:42.609211 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
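The loading snippet referenced above was stripped when this text field was extracted. A minimal sketch of the intended call, assuming the repository follows the leaderboard naming convention `details_<org>__<model>` (the repo id below is inferred from that convention, not quoted from the source; the config and split names are taken from the metadata listing above):

```python
from datasets import load_dataset

# Inferred repo id, following the leaderboard convention details_<org>__<model>
data = load_dataset(
    "open-llm-leaderboard/details_openthaigpt__openthaigpt-1.0.0-beta-7b-chat-ckpt-hf",
    "harness_winogrande_5",  # any config name from the metadata above works
    split="latest",          # or a timestamped split such as "2023_12_29T21_18_42.609211"
)
print(data[0])  # one per-sample evaluation record
```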
[ "# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf\n\n\n\nDataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-29T21:18:42.609211(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf\n\n\n\nDataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-29T21:18:42.609211(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 209, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf\n\n\n\nDataset automatically created during the evaluation run of model openthaigpt/openthaigpt-1.0.0-beta-7b-chat-ckpt-hf on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-29T21:18:42.609211(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]" ]
af49fd6e86d4adc83d4dcc212685ea1771a6149d
# Dataset Card for Evaluation run of kyujinpy/Sakura-SOLAR-Instruct

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct",
	"harness_winogrande_5",
	split="train")
```

## Latest results

These are the [latest results from run 2023-12-27T14:31:12.994833](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct/blob/main/results_2023-12-27T14-31-12.994833.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.6670794648633737, "acc_stderr": 0.03162151337270039, "acc_norm": 0.6678288182149681, "acc_norm_stderr": 0.03226675533800617, "mc1": 0.5703794369645043, "mc1_stderr": 0.017329234580409095, "mc2": 0.7178732393039171, "mc2_stderr": 0.01499862160665204 },
    "harness|arc:challenge|25": { "acc": 0.6825938566552902, "acc_stderr": 0.013602239088038167, "acc_norm": 0.7098976109215017, "acc_norm_stderr": 0.013261573677520767 },
    "harness|hellaswag|10": { "acc": 0.7128062139016133, "acc_stderr": 0.004515280911468822, "acc_norm": 0.8841864170483967, "acc_norm_stderr": 0.0031934725302821703 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.42, "acc_stderr": 0.049604496374885836, "acc_norm": 0.42, "acc_norm_stderr": 0.049604496374885836 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.6148148148148148, "acc_stderr": 0.04203921040156279, "acc_norm": 0.6148148148148148, "acc_norm_stderr": 0.04203921040156279 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.743421052631579, "acc_stderr": 0.0355418036802569, "acc_norm": 0.743421052631579, "acc_norm_stderr": 0.0355418036802569 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6830188679245283, "acc_stderr": 0.02863723563980089, "acc_norm": 0.6830188679245283, "acc_norm_stderr": 0.02863723563980089 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.7638888888888888, "acc_stderr": 0.03551446610810826, "acc_norm": 0.7638888888888888, "acc_norm_stderr": 0.03551446610810826 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620333, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620333 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.52, "acc_stderr": 0.05021167315686779, "acc_norm": 0.52, "acc_norm_stderr": 0.05021167315686779 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr": 0.046882617226215034 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.6705202312138728, "acc_stderr": 0.03583901754736412, "acc_norm": 0.6705202312138728, "acc_norm_stderr": 0.03583901754736412 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.38235294117647056, "acc_stderr": 0.04835503696107223, "acc_norm": 0.38235294117647056, "acc_norm_stderr": 0.04835503696107223 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.6297872340425532, "acc_stderr": 0.03156564682236786, "acc_norm": 0.6297872340425532, "acc_norm_stderr": 0.03156564682236786 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.5, "acc_stderr": 0.047036043419179864, "acc_norm": 0.5, "acc_norm_stderr": 0.047036043419179864 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.6344827586206897, "acc_stderr": 0.040131241954243856, "acc_norm": 0.6344827586206897, "acc_norm_stderr": 0.040131241954243856 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.4947089947089947, "acc_stderr": 0.02574986828855657, "acc_norm": 0.4947089947089947, "acc_norm_stderr": 0.02574986828855657 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.4444444444444444, "acc_stderr": 0.044444444444444495, "acc_norm": 0.4444444444444444, "acc_norm_stderr": 0.044444444444444495 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.34, "acc_stderr": 0.04760952285695235, "acc_norm": 0.34, "acc_norm_stderr": 0.04760952285695235 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.8193548387096774, "acc_stderr": 0.021886178567172534, "acc_norm": 0.8193548387096774, "acc_norm_stderr": 0.021886178567172534 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5123152709359606, "acc_stderr": 0.035169204442208966, "acc_norm": 0.5123152709359606, "acc_norm_stderr": 0.035169204442208966 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.72, "acc_stderr": 0.04512608598542128, "acc_norm": 0.72, "acc_norm_stderr": 0.04512608598542128 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.806060606060606, "acc_stderr": 0.03087414513656209, "acc_norm": 0.806060606060606, "acc_norm_stderr": 0.03087414513656209 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.8686868686868687, "acc_stderr": 0.024063156416822516, "acc_norm": 0.8686868686868687, "acc_norm_stderr": 0.024063156416822516 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.9015544041450777, "acc_stderr": 0.02150024957603347, "acc_norm": 0.9015544041450777, "acc_norm_stderr": 0.02150024957603347 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6666666666666666, "acc_stderr": 0.023901157979402538, "acc_norm": 0.6666666666666666, "acc_norm_stderr": 0.023901157979402538 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.37407407407407406, "acc_stderr": 0.029502861128955286, "acc_norm": 0.37407407407407406, "acc_norm_stderr": 0.029502861128955286 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.7100840336134454, "acc_stderr": 0.029472485833136094, "acc_norm": 0.7100840336134454, "acc_norm_stderr": 0.029472485833136094 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3708609271523179, "acc_stderr": 0.03943966699183629, "acc_norm": 0.3708609271523179, "acc_norm_stderr": 0.03943966699183629 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8495412844036697, "acc_stderr": 0.015328563932669235, "acc_norm": 0.8495412844036697, "acc_norm_stderr": 0.015328563932669235 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.5833333333333334, "acc_stderr": 0.033622774366080424, "acc_norm": 0.5833333333333334, "acc_norm_stderr": 0.033622774366080424 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8578431372549019, "acc_stderr": 0.02450980392156862, "acc_norm": 0.8578431372549019, "acc_norm_stderr": 0.02450980392156862 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8523206751054853, "acc_stderr": 0.0230943295825957, "acc_norm": 0.8523206751054853, "acc_norm_stderr": 0.0230943295825957 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6816143497757847, "acc_stderr": 0.03126580522513713, "acc_norm": 0.6816143497757847, "acc_norm_stderr": 0.03126580522513713 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7480916030534351, "acc_stderr": 0.03807387116306086, "acc_norm": 0.7480916030534351, "acc_norm_stderr": 0.03807387116306086 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.8055555555555556, "acc_stderr": 0.038260763248848646, "acc_norm": 0.8055555555555556, "acc_norm_stderr": 0.038260763248848646 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.754601226993865, "acc_stderr": 0.03380939813943354, "acc_norm": 0.754601226993865, "acc_norm_stderr": 0.03380939813943354 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.4642857142857143, "acc_stderr": 0.04733667890053756, "acc_norm": 0.4642857142857143, "acc_norm_stderr": 0.04733667890053756 },
    "harness|hendrycksTest-management|5": { "acc": 0.8543689320388349, "acc_stderr": 0.03492606476623791, "acc_norm": 0.8543689320388349, "acc_norm_stderr": 0.03492606476623791 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8504273504273504, "acc_stderr": 0.023365051491753715, "acc_norm": 0.8504273504273504, "acc_norm_stderr": 0.023365051491753715 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.69, "acc_stderr": 0.04648231987117316, "acc_norm": 0.69, "acc_norm_stderr": 0.04648231987117316 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8058748403575989, "acc_stderr": 0.014143970276657567, "acc_norm": 0.8058748403575989, "acc_norm_stderr": 0.014143970276657567 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7601156069364162, "acc_stderr": 0.022989592543123567, "acc_norm": 0.7601156069364162, "acc_norm_stderr": 0.022989592543123567 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.39553072625698327, "acc_stderr": 0.016353415410075775, "acc_norm": 0.39553072625698327, "acc_norm_stderr": 0.016353415410075775 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.7581699346405228, "acc_stderr": 0.024518195641879334, "acc_norm": 0.7581699346405228, "acc_norm_stderr": 0.024518195641879334 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.7266881028938906, "acc_stderr": 0.025311765975426122, "acc_norm": 0.7266881028938906, "acc_norm_stderr": 0.025311765975426122 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.7839506172839507, "acc_stderr": 0.022899162918445806, "acc_norm": 0.7839506172839507, "acc_norm_stderr": 0.022899162918445806 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4929078014184397, "acc_stderr": 0.02982449855912901, "acc_norm": 0.4929078014184397, "acc_norm_stderr": 0.02982449855912901 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.49282920469361147, "acc_stderr": 0.012768922739553308, "acc_norm": 0.49282920469361147, "acc_norm_stderr": 0.012768922739553308 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.7426470588235294, "acc_stderr": 0.02655651947004151, "acc_norm": 0.7426470588235294, "acc_norm_stderr": 0.02655651947004151 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6797385620915033, "acc_stderr": 0.018875682938069446, "acc_norm": 0.6797385620915033, "acc_norm_stderr": 0.018875682938069446 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.6818181818181818, "acc_stderr": 0.04461272175910509, "acc_norm": 0.6818181818181818, "acc_norm_stderr": 0.04461272175910509 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.7346938775510204, "acc_stderr": 0.028263889943784593, "acc_norm": 0.7346938775510204, "acc_norm_stderr": 0.028263889943784593 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.8407960199004975, "acc_stderr": 0.02587064676616913, "acc_norm": 0.8407960199004975, "acc_norm_stderr": 0.02587064676616913 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.91, "acc_stderr": 0.028762349126466125, "acc_norm": 0.91, "acc_norm_stderr": 0.028762349126466125 },
    "harness|hendrycksTest-virology|5": { "acc": 0.5843373493975904, "acc_stderr": 0.03836722176598052, "acc_norm": 0.5843373493975904, "acc_norm_stderr": 0.03836722176598052 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.7777777777777778, "acc_stderr": 0.03188578017686398, "acc_norm": 0.7777777777777778, "acc_norm_stderr": 0.03188578017686398 },
    "harness|truthfulqa:mc|0": { "mc1": 0.5703794369645043, "mc1_stderr": 0.017329234580409095, "mc2": 0.7178732393039171, "mc2_stderr": 0.01499862160665204 },
    "harness|winogrande|5": { "acc": 0.8366219415943172, "acc_stderr": 0.010390695970273766 },
    "harness|gsm8k|5": { "acc": 0.6520090978013646, "acc_stderr": 0.013120581030382132 }
}
```

## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
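As a usage note, the aggregated metrics shown under "Latest results" live in the "results" configuration described above. A minimal sketch for pulling them programmatically, assuming the "results" configuration exposes the same "latest" split alias as the per-task configs:

```python
from datasets import load_dataset

# Load the aggregated "results" configuration; "latest" points at the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct",
    "results",
    split="latest",
)
# Each row holds the aggregated metrics for one evaluation run.
print(results[0])
```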
open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct
[ "region:us" ]
2023-12-27T14:33:30+00:00
{"pretty_name": "Evaluation run of kyujinpy/Sakura-SOLAR-Instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [kyujinpy/Sakura-SOLAR-Instruct](https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T14:31:12.994833](https://huggingface.co/datasets/open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct/blob/main/results_2023-12-27T14-31-12.994833.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6670794648633737,\n \"acc_stderr\": 0.03162151337270039,\n \"acc_norm\": 0.6678288182149681,\n \"acc_norm_stderr\": 0.03226675533800617,\n \"mc1\": 0.5703794369645043,\n \"mc1_stderr\": 0.017329234580409095,\n \"mc2\": 0.7178732393039171,\n \"mc2_stderr\": 0.01499862160665204\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6825938566552902,\n \"acc_stderr\": 0.013602239088038167,\n \"acc_norm\": 0.7098976109215017,\n \"acc_norm_stderr\": 0.013261573677520767\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7128062139016133,\n \"acc_stderr\": 0.004515280911468822,\n \"acc_norm\": 0.8841864170483967,\n \"acc_norm_stderr\": 0.0031934725302821703\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.743421052631579,\n \"acc_stderr\": 0.0355418036802569,\n \"acc_norm\": 0.743421052631579,\n \"acc_norm_stderr\": 0.0355418036802569\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6830188679245283,\n \"acc_stderr\": 0.02863723563980089,\n \"acc_norm\": 0.6830188679245283,\n \"acc_norm_stderr\": 0.02863723563980089\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7638888888888888,\n \"acc_stderr\": 0.03551446610810826,\n \"acc_norm\": 0.7638888888888888,\n \"acc_norm_stderr\": 0.03551446610810826\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620333,\n 
\"acc_norm\": 0.46,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.05021167315686779,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.05021167315686779\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6705202312138728,\n \"acc_stderr\": 0.03583901754736412,\n \"acc_norm\": 0.6705202312138728,\n \"acc_norm_stderr\": 0.03583901754736412\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.6297872340425532,\n \"acc_stderr\": 0.03156564682236786,\n \"acc_norm\": 0.6297872340425532,\n \"acc_norm_stderr\": 0.03156564682236786\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4444444444444444,\n \"acc_stderr\": 0.044444444444444495,\n \"acc_norm\": 0.4444444444444444,\n \"acc_norm_stderr\": 0.044444444444444495\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8193548387096774,\n \"acc_stderr\": 0.021886178567172534,\n \"acc_norm\": 0.8193548387096774,\n \"acc_norm_stderr\": 0.021886178567172534\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.806060606060606,\n \"acc_stderr\": 0.03087414513656209,\n \"acc_norm\": 0.806060606060606,\n \"acc_norm_stderr\": 0.03087414513656209\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.02150024957603347,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.02150024957603347\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n 
\"acc_stderr\": 0.023901157979402538,\n \"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402538\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37407407407407406,\n \"acc_stderr\": 0.029502861128955286,\n \"acc_norm\": 0.37407407407407406,\n \"acc_norm_stderr\": 0.029502861128955286\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7100840336134454,\n \"acc_stderr\": 0.029472485833136094,\n \"acc_norm\": 0.7100840336134454,\n \"acc_norm_stderr\": 0.029472485833136094\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3708609271523179,\n \"acc_stderr\": 0.03943966699183629,\n \"acc_norm\": 0.3708609271523179,\n \"acc_norm_stderr\": 0.03943966699183629\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669235,\n \"acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669235\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.033622774366080424,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.033622774366080424\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8578431372549019,\n \"acc_stderr\": 0.02450980392156862,\n \"acc_norm\": 0.8578431372549019,\n \"acc_norm_stderr\": 0.02450980392156862\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8523206751054853,\n \"acc_stderr\": 0.0230943295825957,\n \"acc_norm\": 0.8523206751054853,\n \"acc_norm_stderr\": 0.0230943295825957\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4642857142857143,\n \"acc_stderr\": 0.04733667890053756,\n \"acc_norm\": 0.4642857142857143,\n \"acc_norm_stderr\": 0.04733667890053756\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8504273504273504,\n \"acc_stderr\": 0.023365051491753715,\n \"acc_norm\": 0.8504273504273504,\n \"acc_norm_stderr\": 0.023365051491753715\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8058748403575989,\n \"acc_stderr\": 0.014143970276657567,\n \"acc_norm\": 0.8058748403575989,\n 
\"acc_norm_stderr\": 0.014143970276657567\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7601156069364162,\n \"acc_stderr\": 0.022989592543123567,\n \"acc_norm\": 0.7601156069364162,\n \"acc_norm_stderr\": 0.022989592543123567\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.39553072625698327,\n \"acc_stderr\": 0.016353415410075775,\n \"acc_norm\": 0.39553072625698327,\n \"acc_norm_stderr\": 0.016353415410075775\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7581699346405228,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.7581699346405228,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7266881028938906,\n \"acc_stderr\": 0.025311765975426122,\n \"acc_norm\": 0.7266881028938906,\n \"acc_norm_stderr\": 0.025311765975426122\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7839506172839507,\n \"acc_stderr\": 0.022899162918445806,\n \"acc_norm\": 0.7839506172839507,\n \"acc_norm_stderr\": 0.022899162918445806\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.49282920469361147,\n \"acc_stderr\": 0.012768922739553308,\n \"acc_norm\": 0.49282920469361147,\n \"acc_norm_stderr\": 0.012768922739553308\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7426470588235294,\n \"acc_stderr\": 0.02655651947004151,\n \"acc_norm\": 0.7426470588235294,\n \"acc_norm_stderr\": 0.02655651947004151\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6797385620915033,\n \"acc_stderr\": 0.018875682938069446,\n \"acc_norm\": 0.6797385620915033,\n \"acc_norm_stderr\": 0.018875682938069446\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7346938775510204,\n \"acc_stderr\": 0.028263889943784593,\n \"acc_norm\": 0.7346938775510204,\n \"acc_norm_stderr\": 0.028263889943784593\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.91,\n \"acc_stderr\": 0.028762349126466125,\n \"acc_norm\": 0.91,\n \"acc_norm_stderr\": 0.028762349126466125\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5843373493975904,\n \"acc_stderr\": 0.03836722176598052,\n \"acc_norm\": 0.5843373493975904,\n \"acc_norm_stderr\": 0.03836722176598052\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03188578017686398,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03188578017686398\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5703794369645043,\n \"mc1_stderr\": 0.017329234580409095,\n \"mc2\": 0.7178732393039171,\n \"mc2_stderr\": 0.01499862160665204\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8366219415943172,\n \"acc_stderr\": 0.010390695970273766\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6520090978013646,\n \"acc_stderr\": 0.013120581030382132\n }\n}\n```", "repo_url": "https://huggingface.co/kyujinpy/Sakura-SOLAR-Instruct", 
"leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|arc:challenge|25_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|gsm8k|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hellaswag|10_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-31-12.994833.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-31-12.994833.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-31-12.994833.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T14-31-12.994833.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-31-12.994833.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T14-31-12.994833.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["**/details_harness|winogrande|5_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T14-31-12.994833.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T14_31_12.994833", "path": ["results_2023-12-27T14-31-12.994833.parquet"]}, {"split": "latest", "path": 
["results_2023-12-27T14-31-12.994833.parquet"]}]}]}
2023-12-27T14:33:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of kyujinpy/Sakura-SOLAR-Instruct

Dataset automatically created during the evaluation run of model kyujinpy/Sakura-SOLAR-Instruct on the Open LLM Leaderboard.

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).

To load the details from a run, you can for instance do the following (the loading snippet is reproduced after this card text):

## Latest results

These are the latest results from run 2023-12-27T14:31:12.994833 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

## Dataset Details

### Dataset Description

- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:

### Dataset Sources [optional]

- Repository:
- Paper [optional]:
- Demo [optional]:

## Uses

### Direct Use

### Out-of-Scope Use

## Dataset Structure

## Dataset Creation

### Curation Rationale

### Source Data

#### Data Collection and Processing

#### Who are the source data producers?

### Annotations [optional]

#### Annotation process

#### Who are the annotators?

#### Personal and Sensitive Information

## Bias, Risks, and Limitations

### Recommendations

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

[optional]

BibTeX:

APA:

## Glossary [optional]

## More Information [optional]

## Dataset Card Authors [optional]

## Dataset Card Contact
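For reference, the loading snippet that the card text above refers to (it appears verbatim in the metadata block earlier in this record) is:

```python
from datasets import load_dataset

# Load the per-task details for one evaluated task of this run.
data = load_dataset("open-llm-leaderboard/details_kyujinpy__Sakura-SOLAR-Instruct",
    "harness_winogrande_5",
    split="train")
```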
[ "# Dataset Card for Evaluation run of kyujinpy/Sakura-SOLAR-Instruct\n\n\n\nDataset automatically created during the evaluation run of model kyujinpy/Sakura-SOLAR-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T14:31:12.994833(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of kyujinpy/Sakura-SOLAR-Instruct\n\n\n\nDataset automatically created during the evaluation run of model kyujinpy/Sakura-SOLAR-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T14:31:12.994833(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 187, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of kyujinpy/Sakura-SOLAR-Instruct\n\n\n\nDataset automatically created during the evaluation run of model kyujinpy/Sakura-SOLAR-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T14:31:12.994833(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
d73f91ae9ad38e64ca046442f4d4ad77baf83ebb
# Dataset Card for Evaluation run of buildingthemoon/testfinetunedmodel

<!-- Provide a quick summary of the dataset. -->

Dataset automatically created during the evaluation run of model [buildingthemoon/testfinetunedmodel](https://huggingface.co/buildingthemoon/testfinetunedmodel) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).

The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).

To load the details from a run, you can for instance do the following:

```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel",
    "harness_winogrande_5",
    split="train")
```

## Latest results

These are the [latest results from run 2023-12-27T16:09:40.909898](https://huggingface.co/datasets/open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel/blob/main/results_2023-12-27T16-09-40.909898.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):

```python
{ "all": { "acc": 0.26047592275815945, "acc_stderr": 0.030833701120092122, "acc_norm": 0.26154537083052126, "acc_norm_stderr": 0.03165249934857471, "mc1": 0.23133414932680538, "mc1_stderr": 0.014761945174862678, "mc2": 0.4074784760160662, "mc2_stderr": 0.015843325926768233 }, "harness|arc:challenge|25": { "acc": 0.2235494880546075, "acc_stderr": 0.012174896631202609, "acc_norm": 0.25853242320819114, "acc_norm_stderr": 0.01279455375428868 }, "harness|hellaswag|10": { "acc": 0.2969527982473611, "acc_stderr": 0.004559817589182069, "acc_norm": 0.31398127862975506, "acc_norm_stderr": 0.004631603539751957 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.19, "acc_stderr": 0.03942772444036625, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036625 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.22962962962962963, "acc_stderr": 0.03633384414073462, "acc_norm": 0.22962962962962963, "acc_norm_stderr": 0.03633384414073462 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.17763157894736842, "acc_stderr": 0.031103182383123398, "acc_norm": 0.17763157894736842, "acc_norm_stderr": 0.031103182383123398 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.21, "acc_stderr": 0.04093601807403326, "acc_norm": 0.21, "acc_norm_stderr": 0.04093601807403326 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2528301886792453, "acc_stderr": 0.02674989977124123, "acc_norm": 0.2528301886792453, "acc_norm_stderr": 0.02674989977124123 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.2569444444444444, "acc_stderr": 0.03653946969442099, "acc_norm": 0.2569444444444444, "acc_norm_stderr": 0.03653946969442099 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.22, "acc_stderr": 0.04163331998932269, "acc_norm": 0.22, "acc_norm_stderr": 0.04163331998932269 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm": 0.32, "acc_norm_stderr":
0.046882617226215034 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.27, "acc_stderr": 0.04461960433384741, "acc_norm": 0.27, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.20809248554913296, "acc_stderr": 0.03095289021774988, "acc_norm": 0.20809248554913296, "acc_norm_stderr": 0.03095289021774988 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.21568627450980393, "acc_stderr": 0.04092563958237654, "acc_norm": 0.21568627450980393, "acc_norm_stderr": 0.04092563958237654 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.23, "acc_stderr": 0.04229525846816506, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.26382978723404255, "acc_stderr": 0.02880998985410297, "acc_norm": 0.26382978723404255, "acc_norm_stderr": 0.02880998985410297 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.23684210526315788, "acc_stderr": 0.039994238792813365, "acc_norm": 0.23684210526315788, "acc_norm_stderr": 0.039994238792813365 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.23448275862068965, "acc_stderr": 0.035306258743465914, "acc_norm": 0.23448275862068965, "acc_norm_stderr": 0.035306258743465914 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.24867724867724866, "acc_stderr": 0.022261817692400168, "acc_norm": 0.24867724867724866, "acc_norm_stderr": 0.022261817692400168 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.29365079365079366, "acc_stderr": 0.040735243221471255, "acc_norm": 0.29365079365079366, "acc_norm_stderr": 0.040735243221471255 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.19, "acc_stderr": 0.03942772444036624, "acc_norm": 0.19, "acc_norm_stderr": 0.03942772444036624 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.267741935483871, "acc_stderr": 0.02518900666021238, "acc_norm": 0.267741935483871, "acc_norm_stderr": 0.02518900666021238 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.22167487684729065, "acc_stderr": 0.0292255758924896, "acc_norm": 0.22167487684729065, "acc_norm_stderr": 0.0292255758924896 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.22, "acc_stderr": 0.0416333199893227, "acc_norm": 0.22, "acc_norm_stderr": 0.0416333199893227 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.23030303030303031, "acc_stderr": 0.03287666758603488, "acc_norm": 0.23030303030303031, "acc_norm_stderr": 0.03287666758603488 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.35353535353535354, "acc_stderr": 0.03406086723547153, "acc_norm": 0.35353535353535354, "acc_norm_stderr": 0.03406086723547153 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.36787564766839376, "acc_stderr": 0.03480175668466036, "acc_norm": 0.36787564766839376, "acc_norm_stderr": 0.03480175668466036 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.32051282051282054, "acc_stderr": 0.023661296393964273, "acc_norm": 0.32051282051282054, "acc_norm_stderr": 0.023661296393964273 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.27037037037037037, "acc_stderr": 0.02708037281514566, "acc_norm": 0.27037037037037037, "acc_norm_stderr": 0.02708037281514566 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.2857142857142857, "acc_stderr": 0.029344572500634346, "acc_norm": 0.2857142857142857, "acc_norm_stderr": 0.029344572500634346 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 
0.31125827814569534, "acc_stderr": 0.03780445850526733, "acc_norm": 0.31125827814569534, "acc_norm_stderr": 0.03780445850526733 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.3486238532110092, "acc_stderr": 0.020431254090714328, "acc_norm": 0.3486238532110092, "acc_norm_stderr": 0.020431254090714328 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4305555555555556, "acc_stderr": 0.03376922151252336, "acc_norm": 0.4305555555555556, "acc_norm_stderr": 0.03376922151252336 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.27941176470588236, "acc_stderr": 0.031493281045079556, "acc_norm": 0.27941176470588236, "acc_norm_stderr": 0.031493281045079556 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.2869198312236287, "acc_stderr": 0.02944377302259469, "acc_norm": 0.2869198312236287, "acc_norm_stderr": 0.02944377302259469 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.14798206278026907, "acc_stderr": 0.023831557157613523, "acc_norm": 0.14798206278026907, "acc_norm_stderr": 0.023831557157613523 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.25190839694656486, "acc_stderr": 0.03807387116306086, "acc_norm": 0.25190839694656486, "acc_norm_stderr": 0.03807387116306086 }, "harness|hendrycksTest-international_law|5": { "acc": 0.24793388429752067, "acc_stderr": 0.03941897526516302, "acc_norm": 0.24793388429752067, "acc_norm_stderr": 0.03941897526516302 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.21296296296296297, "acc_stderr": 0.0395783547198098, "acc_norm": 0.21296296296296297, "acc_norm_stderr": 0.0395783547198098 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.26993865030674846, "acc_stderr": 0.03487825168497892, "acc_norm": 0.26993865030674846, "acc_norm_stderr": 0.03487825168497892 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.22321428571428573, "acc_stderr": 0.039523019677025116, "acc_norm": 0.22321428571428573, "acc_norm_stderr": 0.039523019677025116 }, "harness|hendrycksTest-management|5": { "acc": 0.3786407766990291, "acc_stderr": 0.04802694698258972, "acc_norm": 0.3786407766990291, "acc_norm_stderr": 0.04802694698258972 }, "harness|hendrycksTest-marketing|5": { "acc": 0.2564102564102564, "acc_stderr": 0.02860595370200427, "acc_norm": 0.2564102564102564, "acc_norm_stderr": 0.02860595370200427 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.3, "acc_stderr": 0.046056618647183814, "acc_norm": 0.3, "acc_norm_stderr": 0.046056618647183814 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.210727969348659, "acc_stderr": 0.014583812465862557, "acc_norm": 0.210727969348659, "acc_norm_stderr": 0.014583812465862557 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.24566473988439305, "acc_stderr": 0.02317629820399201, "acc_norm": 0.24566473988439305, "acc_norm_stderr": 0.02317629820399201 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.23798882681564246, "acc_stderr": 0.014242630070574915, "acc_norm": 0.23798882681564246, "acc_norm_stderr": 0.014242630070574915 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.24509803921568626, "acc_stderr": 0.02463004897982478, "acc_norm": 0.24509803921568626, "acc_norm_stderr": 0.02463004897982478 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.2057877813504823, "acc_stderr": 0.022961339906764244, "acc_norm": 0.2057877813504823, "acc_norm_stderr": 0.022961339906764244 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.21604938271604937, "acc_stderr": 0.022899162918445803, "acc_norm": 0.21604938271604937, "acc_norm_stderr": 
0.022899162918445803 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.25886524822695034, "acc_stderr": 0.026129572527180848, "acc_norm": 0.25886524822695034, "acc_norm_stderr": 0.026129572527180848 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.24641460234680573, "acc_stderr": 0.011005971399927235, "acc_norm": 0.24641460234680573, "acc_norm_stderr": 0.011005971399927235 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.4485294117647059, "acc_stderr": 0.030211479609121596, "acc_norm": 0.4485294117647059, "acc_norm_stderr": 0.030211479609121596 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.25980392156862747, "acc_stderr": 0.017740899509177788, "acc_norm": 0.25980392156862747, "acc_norm_stderr": 0.017740899509177788 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.20909090909090908, "acc_stderr": 0.03895091015724137, "acc_norm": 0.20909090909090908, "acc_norm_stderr": 0.03895091015724137 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.363265306122449, "acc_stderr": 0.030789051139030802, "acc_norm": 0.363265306122449, "acc_norm_stderr": 0.030789051139030802 }, "harness|hendrycksTest-sociology|5": { "acc": 0.23880597014925373, "acc_stderr": 0.030147775935409217, "acc_norm": 0.23880597014925373, "acc_norm_stderr": 0.030147775935409217 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.22, "acc_stderr": 0.041633319989322674, "acc_norm": 0.22, "acc_norm_stderr": 0.041633319989322674 }, "harness|hendrycksTest-virology|5": { "acc": 0.21084337349397592, "acc_stderr": 0.03175554786629921, "acc_norm": 0.21084337349397592, "acc_norm_stderr": 0.03175554786629921 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.30994152046783624, "acc_stderr": 0.035469769593931624, "acc_norm": 0.30994152046783624, "acc_norm_stderr": 0.035469769593931624 }, "harness|truthfulqa:mc|0": { "mc1": 0.23133414932680538, "mc1_stderr": 0.014761945174862678, "mc2": 0.4074784760160662, "mc2_stderr": 0.015843325926768233 }, "harness|winogrande|5": { "acc": 0.5098658247829518, "acc_stderr": 0.014049749833367596 }, "harness|gsm8k|5": { "acc": 0.0, "acc_stderr": 0.0 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. 
news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel
[ "region:us" ]
2023-12-27T16:11:03+00:00
{"pretty_name": "Evaluation run of buildingthemoon/testfinetunedmodel", "dataset_summary": "Dataset automatically created during the evaluation run of model [buildingthemoon/testfinetunedmodel](https://huggingface.co/buildingthemoon/testfinetunedmodel) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T16:09:40.909898](https://huggingface.co/datasets/open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel/blob/main/results_2023-12-27T16-09-40.909898.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.26047592275815945,\n \"acc_stderr\": 0.030833701120092122,\n \"acc_norm\": 0.26154537083052126,\n \"acc_norm_stderr\": 0.03165249934857471,\n \"mc1\": 0.23133414932680538,\n \"mc1_stderr\": 0.014761945174862678,\n \"mc2\": 0.4074784760160662,\n \"mc2_stderr\": 0.015843325926768233\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.2235494880546075,\n \"acc_stderr\": 0.012174896631202609,\n \"acc_norm\": 0.25853242320819114,\n \"acc_norm_stderr\": 0.01279455375428868\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2969527982473611,\n \"acc_stderr\": 0.004559817589182069,\n \"acc_norm\": 0.31398127862975506,\n \"acc_norm_stderr\": 0.004631603539751957\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.22962962962962963,\n \"acc_stderr\": 0.03633384414073462,\n \"acc_norm\": 0.22962962962962963,\n \"acc_norm_stderr\": 0.03633384414073462\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2528301886792453,\n \"acc_stderr\": 0.02674989977124123,\n \"acc_norm\": 0.2528301886792453,\n \"acc_norm_stderr\": 0.02674989977124123\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2569444444444444,\n \"acc_stderr\": 0.03653946969442099,\n \"acc_norm\": 0.2569444444444444,\n \"acc_norm_stderr\": 0.03653946969442099\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.22,\n 
\"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.20809248554913296,\n \"acc_stderr\": 0.03095289021774988,\n \"acc_norm\": 0.20809248554913296,\n \"acc_norm_stderr\": 0.03095289021774988\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.26382978723404255,\n \"acc_stderr\": 0.02880998985410297,\n \"acc_norm\": 0.26382978723404255,\n \"acc_norm_stderr\": 0.02880998985410297\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.039994238792813365,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.039994238792813365\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.23448275862068965,\n \"acc_stderr\": 0.035306258743465914,\n \"acc_norm\": 0.23448275862068965,\n \"acc_norm_stderr\": 0.035306258743465914\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24867724867724866,\n \"acc_stderr\": 0.022261817692400168,\n \"acc_norm\": 0.24867724867724866,\n \"acc_norm_stderr\": 0.022261817692400168\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.29365079365079366,\n \"acc_stderr\": 0.040735243221471255,\n \"acc_norm\": 0.29365079365079366,\n \"acc_norm_stderr\": 0.040735243221471255\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036624,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036624\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.267741935483871,\n \"acc_stderr\": 0.02518900666021238,\n \"acc_norm\": 0.267741935483871,\n \"acc_norm_stderr\": 0.02518900666021238\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.22167487684729065,\n \"acc_stderr\": 0.0292255758924896,\n \"acc_norm\": 0.22167487684729065,\n \"acc_norm_stderr\": 0.0292255758924896\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.0416333199893227,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.0416333199893227\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.23030303030303031,\n \"acc_stderr\": 0.03287666758603488,\n \"acc_norm\": 0.23030303030303031,\n \"acc_norm_stderr\": 0.03287666758603488\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.35353535353535354,\n \"acc_stderr\": 0.03406086723547153,\n \"acc_norm\": 0.35353535353535354,\n \"acc_norm_stderr\": 0.03406086723547153\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.36787564766839376,\n \"acc_stderr\": 0.03480175668466036,\n \"acc_norm\": 0.36787564766839376,\n \"acc_norm_stderr\": 0.03480175668466036\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.32051282051282054,\n \"acc_stderr\": 0.023661296393964273,\n \"acc_norm\": 0.32051282051282054,\n \"acc_norm_stderr\": 0.023661296393964273\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.27037037037037037,\n \"acc_stderr\": 0.02708037281514566,\n \"acc_norm\": 0.27037037037037037,\n \"acc_norm_stderr\": 0.02708037281514566\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.2857142857142857,\n \"acc_stderr\": 0.029344572500634346,\n \"acc_norm\": 0.2857142857142857,\n \"acc_norm_stderr\": 0.029344572500634346\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31125827814569534,\n \"acc_stderr\": 0.03780445850526733,\n \"acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.03780445850526733\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.3486238532110092,\n \"acc_stderr\": 0.020431254090714328,\n \"acc_norm\": 0.3486238532110092,\n \"acc_norm_stderr\": 0.020431254090714328\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4305555555555556,\n \"acc_stderr\": 0.03376922151252336,\n \"acc_norm\": 0.4305555555555556,\n \"acc_norm_stderr\": 0.03376922151252336\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.27941176470588236,\n \"acc_stderr\": 0.031493281045079556,\n \"acc_norm\": 0.27941176470588236,\n \"acc_norm_stderr\": 0.031493281045079556\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2869198312236287,\n \"acc_stderr\": 0.02944377302259469,\n \"acc_norm\": 0.2869198312236287,\n \"acc_norm_stderr\": 0.02944377302259469\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.14798206278026907,\n \"acc_stderr\": 0.023831557157613523,\n \"acc_norm\": 0.14798206278026907,\n \"acc_norm_stderr\": 0.023831557157613523\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.25190839694656486,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.25190839694656486,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.24793388429752067,\n \"acc_stderr\": 0.03941897526516302,\n \"acc_norm\": 0.24793388429752067,\n \"acc_norm_stderr\": 0.03941897526516302\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.21296296296296297,\n \"acc_stderr\": 0.0395783547198098,\n \"acc_norm\": 0.21296296296296297,\n \"acc_norm_stderr\": 0.0395783547198098\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.26993865030674846,\n \"acc_stderr\": 0.03487825168497892,\n \"acc_norm\": 0.26993865030674846,\n \"acc_norm_stderr\": 0.03487825168497892\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.22321428571428573,\n \"acc_stderr\": 0.039523019677025116,\n \"acc_norm\": 0.22321428571428573,\n \"acc_norm_stderr\": 0.039523019677025116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.3786407766990291,\n \"acc_stderr\": 0.04802694698258972,\n \"acc_norm\": 0.3786407766990291,\n \"acc_norm_stderr\": 0.04802694698258972\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2564102564102564,\n \"acc_stderr\": 0.02860595370200427,\n \"acc_norm\": 0.2564102564102564,\n \"acc_norm_stderr\": 0.02860595370200427\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n 
\"acc\": 0.210727969348659,\n \"acc_stderr\": 0.014583812465862557,\n \"acc_norm\": 0.210727969348659,\n \"acc_norm_stderr\": 0.014583812465862557\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.02317629820399201,\n \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.02317629820399201\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.23798882681564246,\n \"acc_stderr\": 0.014242630070574915,\n \"acc_norm\": 0.23798882681564246,\n \"acc_norm_stderr\": 0.014242630070574915\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.02463004897982478,\n \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.02463004897982478\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2057877813504823,\n \"acc_stderr\": 0.022961339906764244,\n \"acc_norm\": 0.2057877813504823,\n \"acc_norm_stderr\": 0.022961339906764244\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.21604938271604937,\n \"acc_stderr\": 0.022899162918445803,\n \"acc_norm\": 0.21604938271604937,\n \"acc_norm_stderr\": 0.022899162918445803\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.25886524822695034,\n \"acc_stderr\": 0.026129572527180848,\n \"acc_norm\": 0.25886524822695034,\n \"acc_norm_stderr\": 0.026129572527180848\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24641460234680573,\n \"acc_stderr\": 0.011005971399927235,\n \"acc_norm\": 0.24641460234680573,\n \"acc_norm_stderr\": 0.011005971399927235\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.4485294117647059,\n \"acc_stderr\": 0.030211479609121596,\n \"acc_norm\": 0.4485294117647059,\n \"acc_norm_stderr\": 0.030211479609121596\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.25980392156862747,\n \"acc_stderr\": 0.017740899509177788,\n \"acc_norm\": 0.25980392156862747,\n \"acc_norm_stderr\": 0.017740899509177788\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.03895091015724137,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.03895091015724137\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.363265306122449,\n \"acc_stderr\": 0.030789051139030802,\n \"acc_norm\": 0.363265306122449,\n \"acc_norm_stderr\": 0.030789051139030802\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23880597014925373,\n \"acc_stderr\": 0.030147775935409217,\n \"acc_norm\": 0.23880597014925373,\n \"acc_norm_stderr\": 0.030147775935409217\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.041633319989322674,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.041633319989322674\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.21084337349397592,\n \"acc_stderr\": 0.03175554786629921,\n \"acc_norm\": 0.21084337349397592,\n \"acc_norm_stderr\": 0.03175554786629921\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.30994152046783624,\n \"acc_stderr\": 0.035469769593931624,\n \"acc_norm\": 0.30994152046783624,\n \"acc_norm_stderr\": 0.035469769593931624\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23133414932680538,\n \"mc1_stderr\": 0.014761945174862678,\n \"mc2\": 0.4074784760160662,\n \"mc2_stderr\": 0.015843325926768233\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5098658247829518,\n \"acc_stderr\": 0.014049749833367596\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 
0.0\n }\n}\n```", "repo_url": "https://huggingface.co/buildingthemoon/testfinetunedmodel", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|arc:challenge|25_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|gsm8k|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hellaswag|10_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T16-09-40.909898.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T16-09-40.909898.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T16-09-40.909898.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T16-09-40.909898.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T16-09-40.909898.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T16_09_40.909898", "path": ["**/details_harness|winogrande|5_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T16-09-40.909898.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_27T16_09_40.909898", "path": ["results_2023-12-27T16-09-40.909898.parquet"]}, {"split": "latest", "path": ["results_2023-12-27T16-09-40.909898.parquet"]}]}]}
2023-12-27T16:11:23+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of buildingthemoon/testfinetunedmodel Dataset automatically created during the evaluation run of model buildingthemoon/testfinetunedmodel on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T16:09:40.909898 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
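The loading snippet the card refers to, as preserved verbatim in the metadata's `dataset_summary` field above, is:

```python
from datasets import load_dataset

# Load the five-shot Winogrande details; "train" points at the
# latest results, per the card text above.
data = load_dataset("open-llm-leaderboard/details_buildingthemoon__testfinetunedmodel",
    "harness_winogrande_5",
    split="train")
```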
[ "# Dataset Card for Evaluation run of buildingthemoon/testfinetunedmodel\n\n\n\nDataset automatically created during the evaluation run of model buildingthemoon/testfinetunedmodel on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T16:09:40.909898(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of buildingthemoon/testfinetunedmodel\n\n\n\nDataset automatically created during the evaluation run of model buildingthemoon/testfinetunedmodel on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T16:09:40.909898(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 179, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of buildingthemoon/testfinetunedmodel\n\n\n\nDataset automatically created during the evaluation run of model buildingthemoon/testfinetunedmodel on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T16:09:40.909898(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
7d72a2464d1394fb3d0d35105467019d1ca02527
No Robots dataset translated to Kannada (KN) with chat and code data removed.
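A minimal loading sketch (an illustration only, assuming the standard Hugging Face `datasets` API; the repo id and the "INSTRUCTION"/"RESPONSE" column names below are taken from this card's metadata):

```python
from datasets import load_dataset

# Only a "train" split is listed in this card's metadata.
ds = load_dataset("Tensoic/no_robots_kn", split="train")
print(ds[0]["INSTRUCTION"])
print(ds[0]["RESPONSE"])
```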
Tensoic/no_robots_kn
[ "task_categories:text-generation", "language:kn", "license:cc-by-nc-4.0", "region:us" ]
2023-12-27T16:15:10+00:00
{"language": ["kn"], "license": "cc-by-nc-4.0", "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "RESPONSE", "dtype": "string"}, {"name": "INSTRUCTION", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 30672322, "num_examples": 8370}], "download_size": 11600467, "dataset_size": 30672322}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-03T16:45:09+00:00
[]
[ "kn" ]
TAGS #task_categories-text-generation #language-Kannada #license-cc-by-nc-4.0 #region-us
No Robots dataset translated to Kannada (KN) with chat and code data removed.
[]
[ "TAGS\n#task_categories-text-generation #language-Kannada #license-cc-by-nc-4.0 #region-us \n" ]
[ 33 ]
[ "passage: TAGS\n#task_categories-text-generation #language-Kannada #license-cc-by-nc-4.0 #region-us \n" ]
14d288b8dc42542d59ef0434cc4cc6fba807acaa
# Dataset Card for `Shakespearean and Modern English Conversational Dataset` ## Table of Contents - [Dataset Card for `Shakespearean and Modern English Conversational Dataset`](#dataset-card-for-shakespearean-and-modern-english-conversational-dataset) - [Table of Contents](#table-of-contents) - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) ## Dataset Description - **Homepage:** [SandMec](https://roudranil.github.io/datasets/SandMec) - **Repository:** [Roudranil/shakespearean-chatbot](https://github.com/Roudranil/finetuning-llms-for-conversation-in-shakespearean-english) - **Point of Contact:** [[email protected]](mailto:[email protected]) ### Dataset Summary This dataset contains dialog pairs taken from Shakespeare's works - the first dialog is a translated text in modern English, and the second dialog is its actual response as written in Shakespeare's plays. See the [github repo](https://github.com/Roudranil/finetuning-llms-for-conversation-in-shakespearean-english) for more details.
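As a usage sketch (an illustration only, assuming the standard `datasets` API; the split names and column names below come from this card's configuration):

```python
from datasets import load_dataset

# The card's config maps "train" and "test" to data/train.csv and data/test.csv.
ds = load_dataset("Roudranil/shakespearean-and-modern-english-conversational-dataset")
pair = ds["train"][0]
print(pair["translated_dialog"])  # the modern-English line
print(pair["og_response"])        # the original Shakespearean reply
```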
Roudranil/shakespearean-and-modern-english-conversational-dataset
[ "size_categories:n<10K", "language:en", "fine-tuning", "shakespeare", "region:us" ]
2023-12-27T16:36:01+00:00
{"language": ["en"], "size_categories": ["n<10K"], "pretty_name": "SandMec", "tags": ["fine-tuning", "shakespeare"], "task-categories": ["text-generation"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train.csv"}, {"split": "test", "path": "data/test.csv"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "translated_dialog", "dtype": "string"}, {"name": "og_response", "dtype": "string"}]}}
2023-12-28T18:11:41+00:00
[]
[ "en" ]
TAGS #size_categories-n<10K #language-English #fine-tuning #shakespeare #region-us
# Dataset Card for 'Shakespearean and Modern English Conversational Dataset' ## Table of Contents - Dataset Card for 'Shakespearean and Modern English Conversational Dataset' - Table of Contents - Dataset Description - Dataset Summary ## Dataset Description - Homepage: SandMec - Repository: Roudranil/shakespearean-chatbot - Point of Contact: roudranil@URL ### Dataset Summary This dataset contains dialog pairs taken from Shakespeare's works - the first dialog is a translated text in modern English, and the second dialog is its actual response as written in Shakespeare's plays. See the github repo for more details.
[ "# Dataset Card for 'Shakespearean and Modern English Conversational Dataset'", "## Table of Contents\n- Dataset Card for 'Shakespearean and Modern English Conversational Dataset'\n - Table of Contents\n - Dataset Description\n - Dataset Summary", "## Dataset Description\n\n- Homepage: SandMec\n- Repository: Roudranil/shakespearean-chatbot\n- Point of Contact: roudranil@URL", "### Dataset Summary\n\nThis dataset contains dialog pairs taken from Shakespeare's works - the first dialog is a translated text in modern english, and the second dialog is it's actual response as written in Shakespeare's plays. See the github repo for more details." ]
[ "TAGS\n#size_categories-n<10K #language-English #fine-tuning #shakespeare #region-us \n", "# Dataset Card for 'Shakespearean and Modern English Conversational Dataset'", "## Table of Contents\n- Dataset Card for 'Shakespearean and Modern English Conversational Dataset'\n - Table of Contents\n - Dataset Description\n - Dataset Summary", "## Dataset Description\n\n- Homepage: SandMec\n- Repository: Roudranil/shakespearean-chatbot\n- Point of Contact: roudranil@URL", "### Dataset Summary\n\nThis dataset contains dialog pairs taken from Shakespeare's works - the first dialog is a translated text in modern english, and the second dialog is it's actual response as written in Shakespeare's plays. See the github repo for more details." ]
[ 30, 20, 39, 37, 62 ]
[ "passage: TAGS\n#size_categories-n<10K #language-English #fine-tuning #shakespeare #region-us \n# Dataset Card for 'Shakespearean and Modern English Conversational Dataset'## Table of Contents\n- Dataset Card for 'Shakespearean and Modern English Conversational Dataset'\n - Table of Contents\n - Dataset Description\n - Dataset Summary## Dataset Description\n\n- Homepage: SandMec\n- Repository: Roudranil/shakespearean-chatbot\n- Point of Contact: roudranil@URL### Dataset Summary\n\nThis dataset contains dialog pairs taken from Shakespeare's works - the first dialog is a translated text in modern english, and the second dialog is it's actual response as written in Shakespeare's plays. See the github repo for more details." ]
378d10de89a5078b799d61228729c15d5e7db459
## Table of Contents - [Dataset Summary](#dataset-summary) - [Dataset Attribution](#dataset-attribution) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Dataset Use](#dataset-use) - [Use Cases](#use-cases) - [Usage Caveats](#usage-caveats) - [Getting Started](#getting-started) <p><h1>🐋 The OpenOrca Dataset! 🐋</h1></p> ![OpenOrca Logo](https://huggingface.co/datasets/Open-Orca/OpenOrca/resolve/main/OpenOrcaLogo.png "OpenOrca Logo") <a name="dataset-announcement"></a> We are thrilled to announce the release of the OpenOrca dataset! This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the [Orca paper](https://arxiv.org/abs/2306.02707). It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers! # Official Models ## Mistral-7B-OpenOrca Our [latest model](https://huggingface.co/spaces/Open-Orca/Mistral-7B-OpenOrca), the first 7B to score better overall than all previous models below 30B. 98% of Llama2-70b-chat's performance, in a completely open 7B! ## OpenOrca-Platypus2-13B Our [third model](https://huggingface.co/Open-Orca/OpenOrca-Platypus2-13B), the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard! Released in partnership with Platypus. ## LlongOrca 7B & 13B * Our [first 7B release](https://huggingface.co/Open-Orca/LlongOrca-7B-16k), trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance. * [LlongOrca-13B-16k](https://huggingface.co/Open-Orca/LlongOrca-13B-16k), trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance. ## OpenOrcaxOpenChat-Preview2-13B Our [second model](https://huggingface.co/Open-Orca/OpenOrcaxOpenChat-Preview2-13B), highlighting that we've surpassed the performance reported in the Orca paper. Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B. Released in partnership with OpenChat. ## OpenOrca-Preview1-13B [OpenOrca-Preview1-13B](https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B) This model was trained in less than a day, for <$200, with <10% of our data. At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper. <a name="dataset-summary"></a> # Dataset Summary The OpenOrca dataset is a collection of augmented [FLAN Collection data](https://arxiv.org/abs/2301.13688). Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions. It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope. The data is primarily used for training and evaluation in the field of natural language processing. 
<a name="dataset-attribution"></a> # Dataset Attribution We would like to give special recognition to the following contributors for their significant efforts and dedication: Teknium WingLian/Caseus Eric Hartford NanoBit Pankaj Winddude Rohan http://AlignmentLab.ai: Autometa Entropi AtlasUnified NeverendingToast NanoBit WingLian/Caseus Also of course, as always, TheBloke, for being the backbone of the whole community. Many thanks to NanoBit and Caseus, makers of [Axolotl](https://github.com/OpenAccess-AI-Collective/axolotl), for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials: http://Alignmentlab.ai https://discord.gg/n9hXaBPWxx Want to visualize our full dataset? Check out our [Nomic Atlas Map](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2). [<img src="https://huggingface.co/Open-Orca/OpenOrca-Preview1-13B/resolve/main/OpenOrca%20Nomic%20Atlas.png" alt="Atlas Nomic Dataset Map" width="400" height="400" />](https://atlas.nomic.ai/map/c1b88b47-2d9b-47e0-9002-b80766792582/2560fd25-52fe-42f1-a58f-ff5eccc890d2) <a name="supported-tasks-and-leaderboards"></a> # Supported Tasks and Leaderboards This dataset supports a range of tasks including language modeling, text generation, and text augmentation. It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing. Further information on leaderboards will be updated as they become available. <a name="languages"></a> # Languages The language of the data is primarily English. <a name="dataset-structure"></a> # Dataset Structure <a name="data-instances"></a> ## Data Instances A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5. The response is then entered into the response field. <a name="data-fields"></a> ## Data Fields The fields are: 1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from. 2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint 3) 'question', representing a question entry as provided by the FLAN Collection 4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4. <a name="data-splits"></a> ## Data Splits The data is unsplit. <a name="dataset-creation"></a> # Dataset Creation <a name="curation-rationale"></a> ## Curation Rationale The dataset was created to provide a source of augmented text data for researchers and developers. The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4. This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on. 
<a name="source-data"></a> ## Source Data The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below: 1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use. We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available. 2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. [conceptofmind/flan2021](https://huggingface.co/datasets/conceptofmind/flan2021_submix_original). These are referenced by the [official FLAN Collection repo](https://github.com/google-research/FLAN/tree/main/flan/v2) as the preferred data source. However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively. Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work. <a name="dataset-use"></a> # Dataset Use <a name="use-cases"></a> ## Use Cases The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation. <a name="usage-caveats"></a> ## Usage Caveats Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements. Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper. <a name="getting-started"></a> ## Getting Started This dataset is organized such that it can be naively loaded via Hugging Face datasets library. We recommend using streaming due to the large size of the files. Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face. # Citation ```bibtex @misc{OpenOrca, title = {OpenOrca: An Open Dataset of GPT Augmented FLAN Reasoning Traces}, author = {Wing Lian and Bleys Goodson and Eugene Pentland and Austin Cook and Chanvichet Vong and "Teknium"}, year = {2023}, publisher = {HuggingFace}, journal = {HuggingFace repository}, howpublished = {\url{https://https://huggingface.co/Open-Orca/OpenOrca}}, } ``` ```bibtex @misc{mukherjee2023orca, title={Orca: Progressive Learning from Complex Explanation Traces of GPT-4}, author={Subhabrata Mukherjee and Arindam Mitra and Ganesh Jawahar and Sahaj Agarwal and Hamid Palangi and Ahmed Awadallah}, year={2023}, eprint={2306.02707}, archivePrefix={arXiv}, primaryClass={cs.CL} } ``` ```bibtex @misc{longpre2023flan, title={The Flan Collection: Designing Data and Methods for Effective Instruction Tuning}, author={Shayne Longpre and Le Hou and Tu Vu and Albert Webson and Hyung Won Chung and Yi Tay and Denny Zhou and Quoc V. 
Le and Barret Zoph and Jason Wei and Adam Roberts}, year={2023}, eprint={2301.13688}, archivePrefix={arXiv}, primaryClass={cs.AI} } ``` ```bibtex @misc{touvron2023llama, title={Llama 2: Open Foundation and Fine-Tuned Chat Models}, author={Hugo Touvron and Louis Martin and Kevin Stone and Peter Albert and Amjad Almahairi and Yasmine Babaei and Nikolay Bashlykov and Soumya Batra and Prajjwal Bhargava and Shruti Bhosale and Dan Bikel and Lukas Blecher and Cristian Canton Ferrer and Moya Chen and Guillem Cucurull and David Esiobu and Jude Fernandes and Jeremy Fu and Wenyin Fu and Brian Fuller and Cynthia Gao and Vedanuj Goswami and Naman Goyal and Anthony Hartshorn and Saghar Hosseini and Rui Hou and Hakan Inan and Marcin Kardas and Viktor Kerkez and Madian Khabsa and Isabel Kloumann and Artem Korenev and Punit Singh Koura and Marie-Anne Lachaux and Thibaut Lavril and Jenya Lee and Diana Liskovich and Yinghai Lu and Yuning Mao and Xavier Martinet and Todor Mihaylov and Pushkar Mishra and Igor Molybog and Yixin Nie and Andrew Poulton and Jeremy Reizenstein and Rashi Rungta and Kalyan Saladi and Alan Schelten and Ruan Silva and Eric Michael Smith and Ranjan Subramanian and Xiaoqing Ellen Tan and Binh Tang and Ross Taylor and Adina Williams and Jian Xiang Kuan and Puxin Xu and Zheng Yan and Iliyan Zarov and Yuchen Zhang and Angela Fan and Melanie Kambadur and Sharan Narang and Aurelien Rodriguez and Robert Stojnic and Sergey Edunov and Thomas Scialom}, year={2023}, eprint= arXiv 2307.09288 } @software{touvron2023llama, title={LLaMA: Open and Efficient Foundation Language Models}, author={Touvron, Hugo and Lavril, Thibaut and Izacard, Gautier and Martinet, Xavier and Lachaux, Marie-Anne and Lacroix, Timoth{\'e}e and Rozi{\`e}re, Baptiste and Goyal, Naman and Hambro, Eric and Azhar, Faisal and Rodriguez, Aurelien and Joulin, Armand and Grave, Edouard and Lample, Guillaume}, journal={arXiv preprint arXiv:2302.13971}, year={2023} } ```
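A minimal sketch of the streaming load recommended under "Getting Started" above (assuming the standard `datasets` streaming API; the field names are those listed under Data Fields):

```python
from datasets import load_dataset

# Stream rather than downloading the full dataset up front.
ds = load_dataset("Open-Orca/OpenOrca", split="train", streaming=True)
for row in ds.take(3):
    # Each row carries id, system_prompt, question, and response.
    print(row["id"], row["question"][:80])
```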
polinaeterna/OpenOrca
[ "task_categories:conversational", "task_categories:text-classification", "task_categories:token-classification", "task_categories:table-question-answering", "task_categories:question-answering", "task_categories:zero-shot-classification", "task_categories:summarization", "task_categories:feature-extraction", "task_categories:text-generation", "task_categories:text2text-generation", "size_categories:10M<n<100M", "language:en", "license:mit", "arxiv:2306.02707", "arxiv:2301.13688", "region:us" ]
2023-12-27T17:23:46+00:00
{"language": ["en"], "license": "mit", "size_categories": ["10M<n<100M"], "task_categories": ["conversational", "text-classification", "token-classification", "table-question-answering", "question-answering", "zero-shot-classification", "summarization", "feature-extraction", "text-generation", "text2text-generation"], "pretty_name": "OpenOrca"}
2023-12-27T17:23:49+00:00
[ "2306.02707", "2301.13688" ]
[ "en" ]
TAGS #task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-English #license-mit #arxiv-2306.02707 #arxiv-2301.13688 #region-us
## Table of Contents - Dataset Summary - Dataset Attribution - Supported Tasks and Leaderboards - Languages - Dataset Structure - Data Instances - Data Fields - Data Splits - Dataset Creation - Curation Rationale - Source Data - Dataset Use - Use Cases - Usage Caveats - Getting Started <p><h1> The OpenOrca Dataset! </h1></p> !OpenOrca Logo <a name="dataset-announcement"></a> We are thrilled to announce the release of the OpenOrca dataset! This rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper. It has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers! # Official Models ## Mistral-7B-OpenOrca Our latest model, the first 7B to score better overall than all previous models below 30B. 98% of Llama2-70b-chat's performance, in a completely open 7B! ## OpenOrca-Platypus2-13B Our third model, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard! Released in partnership with Platypus. ## LlongOrca 7B & 13B * Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance. * LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance. ## OpenOrcaxOpenChat-Preview2-13B Our second model, highlighting that we've surpassed the performance reported in the Orca paper. Was #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B. Released in partnership with OpenChat. ## OpenOrca-Preview1-13B OpenOrca-Preview1-13B This model was trained in less than a day, for <$200, with <10% of our data. At release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper. <a name="dataset-summary"></a> # Dataset Summary The OpenOrca dataset is a collection of augmented FLAN Collection data. Currently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions. It is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope. The data is primarily used for training and evaluation in the field of natural language processing. <a name="dataset-attribution"></a> # Dataset Attribution We would like to give special recognition to the following contributors for their significant efforts and dedication: Teknium WingLian/Caseus Eric Hartford NanoBit Pankaj Winddude Rohan URL: Autometa Entropi AtlasUnified NeverendingToast NanoBit WingLian/Caseus Also of course, as always, TheBloke, for being the backbone of the whole community. Many thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! We are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials: URL URL Want to visualize our full dataset? Check out our Nomic Atlas Map. <img src="URL alt="Atlas Nomic Dataset Map" width="400" height="400" /> <a name="supported-tasks-and-leaderboards"></a> # Supported Tasks and Leaderboards This dataset supports a range of tasks including language modeling, text generation, and text augmentation. 
It has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing. Further information on leaderboards will be updated as they become available. <a name="languages"></a> # Languages The language of the data is primarily English. <a name="dataset-structure"></a> # Dataset Structure <a name="data-instances"></a> ## Data Instances A data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5. The response is then entered into the response field. <a name="data-fields"></a> ## Data Fields The fields are: 1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from. 2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint 3) 'question', representing a question entry as provided by the FLAN Collection 4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4. <a name="data-splits"></a> ## Data Splits The data is unsplit. <a name="dataset-creation"></a> # Dataset Creation <a name="curation-rationale"></a> ## Curation Rationale The dataset was created to provide a source of augmented text data for researchers and developers. The datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4. This "reasoning trace" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on. <a name="source-data"></a> ## Source Data The data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below: 1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use. We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available. 2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021. These are referenced by the official FLAN Collection repo as the preferred data source. However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively. Combined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work. <a name="dataset-use"></a> # Dataset Use <a name="use-cases"></a> ## Use Cases The dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation. <a name="usage-caveats"></a> ## Usage Caveats Given that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements. Further, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper. <a name="getting-started"></a> ## Getting Started This dataset is organized such that it can be naively loaded via Hugging Face datasets library. We recommend using streaming due to the large size of the files. 
Regular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face.
[ "## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!", "# Official Models", "## Mistral-7B-OpenOrca\n\nOur latest model, the first 7B to score better overall than all previous models below 30B.\n98% of Llama2-70b-chat's performance, in a completely open 7B!", "## OpenOrca-Platypus2-13B\n\nOur third model, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.", "## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.", "## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.", "## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>", "# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>", "# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? 
Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>", "# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>", "# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>", "# Dataset Structure\n\n<a name=\"data-instances\"></a>", "## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>", "## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>", "## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>", "# Dataset Creation\n\n<a name=\"curation-rationale\"></a>", "## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>", "## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. 
Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>", "# Dataset Use\n\n<a name=\"use-cases\"></a>", "## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>", "## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>", "## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face." ]
[ "TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-English #license-mit #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n", "## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!", "# Official Models", "## Mistral-7B-OpenOrca\n\nOur latest model, the first 7B to score better overall than all previous models below 30B.\n98% of Llama2-70b-chat's performance, in a completely open 7B!", "## OpenOrca-Platypus2-13B\n\nOur third model, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.", "## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.", "## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.", "## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. 
Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a>", "# Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>", "# Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a>", "# Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a>", "# Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a>", "# Dataset Structure\n\n<a name=\"data-instances\"></a>", "## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>", "## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>", "## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a>", "# Dataset Creation\n\n<a name=\"curation-rationale\"></a>", "## Curation Rationale\n\nThe dataset was created to provide a source of augmented text data for researchers and developers.\nThe datapoints are intended primarily to provide an enhancement of the core FLAN Collection data which relies upon the detailed step by step reasoning capabilities of GPT-3.5 and GPT-4.\nThis \"reasoning trace\" augmentation has demonstrated exceptional results, allowing a LLaMA-13B model trained with this data to rival or beat GPT-3.5 on broad sets of hard reasoning 
tasks which all models below 100B parameters had previously performed dramatically worse on.\n\n<a name=\"source-data\"></a>", "## Source Data\n\nThe data is generated using techniques in alignment with the distributions outlined in the Orca paper, except as noted below:\n\n1) There is not enough CoT data in the FLAN Collection to generate 150K zero-shot entries, as the paper purports to use.\n We suspect this portion was either undocumented or misrepresented. We have used the ~75K points available.\n2) We used the pre-generated FLAN Collection datasets hosted on HuggingFace under conceptofmind, e.g. conceptofmind/flan2021.\n These are referenced by the official FLAN Collection repo as the preferred data source.\n However, these are a subset of the full FLAN Collection data, and have less than the required entries for the flan2021 and t0 submixes, by ~1.25M and 200k respectively.\n\nCombined, this gave us ~1.5M fewer datapoints than in the original Orca paper. Completing the set is an ongoing work.\n\n<a name=\"dataset-use\"></a>", "# Dataset Use\n\n<a name=\"use-cases\"></a>", "## Use Cases\n\nThe dataset can be used for tasks related to language understanding, natural language processing, machine learning model training, and model performance evaluation.\n\n<a name=\"usage-caveats\"></a>", "## Usage Caveats\n\nGiven that this is a work-in-progress dataset, it is recommended to regularly check for updates and improvements.\nFurther, the data should be used in accordance with the guidelines and recommendations outlined in the Orca paper.\n\n<a name=\"getting-started\"></a>", "## Getting Started\n\nThis dataset is organized such that it can be naively loaded via Hugging Face datasets library.\nWe recommend using streaming due to the large size of the files.\nRegular updates and data generation progress can be monitored through the OpenOrca repository on Hugging Face." ]
[ 162, 199, 4, 49, 48, 98, 67, 95, 122, 233, 86, 25, 19, 67, 153, 24, 18, 146, 235, 16, 46, 70, 66 ]
[ "passage: TAGS\n#task_categories-conversational #task_categories-text-classification #task_categories-token-classification #task_categories-table-question-answering #task_categories-question-answering #task_categories-zero-shot-classification #task_categories-summarization #task_categories-feature-extraction #task_categories-text-generation #task_categories-text2text-generation #size_categories-10M<n<100M #language-English #license-mit #arxiv-2306.02707 #arxiv-2301.13688 #region-us \n## Table of Contents\n- Dataset Summary\n- Dataset Attribution\n- Supported Tasks and Leaderboards\n- Languages\n- Dataset Structure\n - Data Instances\n - Data Fields\n - Data Splits\n- Dataset Creation\n - Curation Rationale\n - Source Data\n- Dataset Use\n - Use Cases\n - Usage Caveats\n - Getting Started\n\n\n<p><h1> The OpenOrca Dataset! </h1></p>\n\n!OpenOrca Logo\n\n<a name=\"dataset-announcement\"></a>\n\nWe are thrilled to announce the release of the OpenOrca dataset!\nThis rich collection of augmented FLAN data aligns, as best as possible, with the distributions outlined in the Orca paper.\nIt has been instrumental in generating high-performing model checkpoints and serves as a valuable resource for all NLP researchers and developers!# Official Models## Mistral-7B-OpenOrca\n\nOur latest model, the first 7B to score better overall than all previous models below 30B.\n98% of Llama2-70b-chat's performance, in a completely open 7B!## OpenOrca-Platypus2-13B\n\nOur third model, the first 13B model to score higher than LLaMA1-65B on the HuggingFace Leaderboard!\nReleased in partnership with Platypus.", "passage: ## LlongOrca 7B & 13B\n\n* Our first 7B release, trained on top of LLongMA2 to achieve 16,000 tokens context. #1 long context 7B model at release time, with >99% of the overall #1 model's performance.\n* LlongOrca-13B-16k, trained on top of LLongMA2. #1 long context 13B model at release time, with >97% of the overall #1 model's performance.## OpenOrcaxOpenChat-Preview2-13B\n\nOur second model, highlighting that we've surpassed the performance reported in the Orca paper.\nWas #1 at release time, now surpassed by our own OpenOrca-Platypus2-13B.\nReleased in partnership with OpenChat.## OpenOrca-Preview1-13B\n\nOpenOrca-Preview1-13B\nThis model was trained in less than a day, for <$200, with <10% of our data.\nAt release, it beat the current state of the art models on BigBench-Hard and AGIEval. 
Achieves ~60% of the improvements reported in the Orca paper.\n\n<a name=\"dataset-summary\"></a># Dataset Summary\n\nThe OpenOrca dataset is a collection of augmented FLAN Collection data.\nCurrently ~1M GPT-4 completions, and ~3.2M GPT-3.5 completions.\nIt is tabularized in alignment with the distributions presented in the ORCA paper and currently represents a partial completion of the full intended dataset, with ongoing generation to expand its scope.\nThe data is primarily used for training and evaluation in the field of natural language processing.\n\n<a name=\"dataset-attribution\"></a>", "passage: # Dataset Attribution\n\nWe would like to give special recognition to the following contributors for their significant efforts and dedication:\n \n\n Teknium \n WingLian/Caseus\n Eric Hartford\n NanoBit\n Pankaj\n Winddude\n Rohan\n\n URL:\n Autometa\n Entropi\n AtlasUnified\n NeverendingToast\n NanoBit\n WingLian/Caseus\n\nAlso of course, as always, TheBloke, for being the backbone of the whole community.\n\nMany thanks to NanoBit and Caseus, makers of Axolotl, for lending us their expertise on the platform that developed and trained manticore, minotaur, and many others! \n\nWe are welcoming sponsors or collaborators to help us build these models to the scale they deserve. Please reach out via our socials:\nURL URL\n\nWant to visualize our full dataset? Check out our Nomic Atlas Map.\n <img src=\"URL alt=\"Atlas Nomic Dataset Map\" width=\"400\" height=\"400\" />\n\n\n<a name=\"supported-tasks-and-leaderboards\"></a># Supported Tasks and Leaderboards\n\nThis dataset supports a range of tasks including language modeling, text generation, and text augmentation.\nIt has been instrumental in the generation of multiple high-performing model checkpoints which have exhibited exceptional performance in our unit testing.\nFurther information on leaderboards will be updated as they become available.\n\n<a name=\"languages\"></a># Languages\n\nThe language of the data is primarily English.\n\n<a name=\"dataset-structure\"></a># Dataset Structure\n\n<a name=\"data-instances\"></a>## Data Instances\n\nA data instance in this dataset represents entries from the FLAN collection which have been augmented by submitting the listed question to either GPT-4 or GPT-3.5.\nThe response is then entered into the response field.\n\n<a name=\"data-fields\"></a>## Data Fields\n\nThe fields are:\n1) 'id', a unique numbered identifier which includes one of 'niv', 't0', 'cot', or 'flan' to represent which source FLAN Collection submix the 'question' is sourced from.\n2) 'system_prompt', representing the System Prompt presented to the GPT-3.5 or GPT-4 API for the datapoint\n3) 'question', representing a question entry as provided by the FLAN Collection\n4) 'response', a response to that question received from a query to either GPT-3.5 or GPT-4.\n\n<a name=\"data-splits\"></a>## Data Splits\n\nThe data is unsplit.\n\n<a name=\"dataset-creation\"></a># Dataset Creation\n\n<a name=\"curation-rationale\"></a>" ]
ba709f73806a9657ad1ffc4f0dda99b10d27b04e
### Audio dataset from Library "Milutin Bojic" digital repository This dataset is created from a multimedia digital collection, using srt files to split the audio. The content of the dataset is material that our member, Mihailo Miljkovic, dictated into a recorder: his memories from his very interesting life. Great thanks go to the great guys from [CLASSLA - CLARIN Knowledge Centre for South Slavic Languages](https://huggingface.co/classla), [Nikola Ljubesic](https://huggingface.co/nljubesi) and [Peter Rupnik](https://huggingface.co/5roop), for their help in adapting the code for HF publishing!
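The srt-based splitting described above could look roughly like this; a sketch under stated assumptions, not the actual pipeline used. It assumes the third-party `srt` and `pydub` packages and hypothetical file names:

```python
import srt
from pydub import AudioSegment

# Cut one recording into utterance-level clips using its SRT timestamps.
audio = AudioSegment.from_file("recording.wav")  # hypothetical input file
with open("recording.srt", encoding="utf-8") as f:
    subtitles = list(srt.parse(f.read()))

for i, sub in enumerate(subtitles):
    start_ms = int(sub.start.total_seconds() * 1000)
    end_ms = int(sub.end.total_seconds() * 1000)
    clip = audio[start_ms:end_ms]  # pydub slices audio by milliseconds
    # 16 kHz output matches the sampling rate declared in this card's metadata.
    clip.set_frame_rate(16000).export(f"clip_{i:04d}.wav", format="wav")
    # sub.content holds the transcript line paired with this clip.
```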
Sagicc/audio-lmb-ds
[ "size_categories:1K<n<10K", "language:sr", "region:us" ]
2023-12-27T17:25:14+00:00
{"language": ["sr"], "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "transcript", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 529807303.082, "num_examples": 2493}], "download_size": 759351337, "dataset_size": 529807303.082}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-29T16:09:20+00:00
[]
[ "sr" ]
TAGS #size_categories-1K<n<10K #language-Serbian #region-us
### Audio dataset from Library "Milutin Bojic" digital repository This dataset is created from a multimedia digital collection, using srt files to split the audio. The content of the dataset is material that our member, Mihailo Miljkovic, dictated into a recorder: his memories from his very interesting life. Great thanks go to the great guys from CLASSLA - CLARIN Knowledge Centre for South Slavic Languages, Nikola Ljubesic and Peter Rupnik, for their help in adapting the code for HF publishing!
[ "### Audio dataset from Library \"Milutin Bojic\" digital repository\n\nThis dataset is created from multimedia digital collection, using srt files to spit audio. The content of dataset is material that our member, Mihailo Miljkovic dictated in recorder his memories from his very interesting life.\n\nGreat thanks go to the great guys from CLASSLA - CLARIN Knowledge Centre for South Slavic Languages\n\nNikola Ljubesic and Peter Rupnik on help to adapt the code for HF publishing!" ]
[ "TAGS\n#size_categories-1K<n<10K #language-Serbian #region-us \n", "### Audio dataset from Library \"Milutin Bojic\" digital repository\n\nThis dataset is created from multimedia digital collection, using srt files to spit audio. The content of dataset is material that our member, Mihailo Miljkovic dictated in recorder his memories from his very interesting life.\n\nGreat thanks go to the great guys from CLASSLA - CLARIN Knowledge Centre for South Slavic Languages\n\nNikola Ljubesic and Peter Rupnik on help to adapt the code for HF publishing!" ]
[ 23, 108 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-Serbian #region-us \n### Audio dataset from Library \"Milutin Bojic\" digital repository\n\nThis dataset is created from multimedia digital collection, using srt files to spit audio. The content of dataset is material that our member, Mihailo Miljkovic dictated in recorder his memories from his very interesting life.\n\nGreat thanks go to the great guys from CLASSLA - CLARIN Knowledge Centre for South Slavic Languages\n\nNikola Ljubesic and Peter Rupnik on help to adapt the code for HF publishing!" ]
65d41436a9347f9fc4feb8f1a93d9efd8e60abe4
The archive contains a portable ComfyUI build with custom nodes already included. ComfyUI is great; however, you have to download many different add-ons. You can access my kit here. Links to WF running on this kit: https://civitai.com/articles/3451/deep-cache-lcm-and-sdxl-is-so-fast https://civitai.com/articles/3517/improved-face-generation https://civitai.com/user/Aderek514/articles You have to change the file extra_model_paths.yaml to your own settings, because it currently contains mine: "base_path: C:/Users/Aderek/stable-diffusion-webui/" How it loads after unpacking on my PC: l:\ComfyUI_windows_portable>.\python_embeded\python.exe -s ComfyUI\main.py --windows-standalone-build ** ComfyUI startup time: 2023-12-28 08:56:33.501942 ** Platform: Windows ** Python version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] ** Python executable: l:\ComfyUI_windows_portable\python_embeded\python.exe ** Log path: l:\ComfyUI_windows_portable\comfyui.log Prestartup times for custom nodes: 0.0 seconds: L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy 0.3 seconds: L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager Total VRAM 8191 MB, total RAM 32649 MB xformers version: 0.0.23.post1+cu118 Set vram state to: NORMAL_VRAM Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync VAE dtype: torch.bfloat16 Using xformers cross attention
Aderek514/comfyui
[ "license:other", "region:us" ]
2023-12-27T17:35:27+00:00
{"license": "other", "license_name": "comfyui", "license_link": "LICENSE"}
2023-12-28T08:00:55+00:00
[]
[]
TAGS #license-other #region-us
The archive contains a portable ComfyUI build with custom nodes already included. ComfyUI is great; however, you have to download many different add-ons. You can access my kit here. Links to WF running on this kit: URL URL URL You have to change the file extra_model_paths.yaml to your own settings, because it currently contains mine: "base_path: C:/Users/Aderek/stable-diffusion-webui/" How it loads after unpacking on my PC: l:\ComfyUI_windows_portable>.\python_embeded\URL -s ComfyUI\URL --windows-standalone-build ComfyUI startup time: 2023-12-28 08:56:33.501942 Platform: Windows Python version: 3.10.9 (tags/v3.10.9:1dd9be6, Dec 6 2022, 20:01:21) [MSC v.1934 64 bit (AMD64)] Python executable: l:\ComfyUI_windows_portable\python_embeded\URL Log path: l:\ComfyUI_windows_portable\URL Prestartup times for custom nodes: 0.0 seconds: L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\rgthree-comfy 0.3 seconds: L:\ComfyUI_windows_portable\ComfyUI\custom_nodes\ComfyUI-Manager Total VRAM 8191 MB, total RAM 32649 MB xformers version: 0.0.23.post1+cu118 Set vram state to: NORMAL_VRAM Device: cuda:0 NVIDIA GeForce RTX 3060 Ti : cudaMallocAsync VAE dtype: torch.bfloat16 Using xformers cross attention
[]
[ "TAGS\n#license-other #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-other #region-us \n" ]
d48d3028e44f4f707559ae1f3c2cdcc6e318a1d5
Data for finetuning a model for generating BP code (task categories: text-generation; language: en; tags: code).
ronzi10/NLP4BP
[ "region:us" ]
2023-12-27T17:36:31+00:00
{}
2023-12-27T17:38:46+00:00
[]
[]
TAGS #region-us
Data for finetuning a model for generating BP code (task categories: text-generation; language: en; tags: code).
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
08b7b58f5cc9da28be12602dd5f6cdce84165d0a
# Dataset Card for "code_search_net_python_func_names" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hynky/code_search_net_python_func_names
[ "region:us" ]
2023-12-27T18:55:37+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "source_code", "dtype": "string"}, {"name": "function_name", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 524033123, "num_examples": 405813}, {"name": "test", "num_bytes": 3145102, "num_examples": 2000}, {"name": "validation", "num_bytes": 2819992, "num_examples": 2000}], "download_size": 180129912, "dataset_size": 529998217}}
2023-12-27T18:56:07+00:00
[]
[]
TAGS #region-us
# Dataset Card for "code_search_net_python_func_names" More Information needed
[ "# Dataset Card for \"code_search_net_python_func_names\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"code_search_net_python_func_names\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"code_search_net_python_func_names\"\n\nMore Information needed" ]
3a6cf6003c870d1f5608cfe76b998157c7164558
# Dataset of Tooru This is the dataset of Tooru, containing 585 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs) ([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 585 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 1481 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 1720 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 585 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 585 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 585 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 1481 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 1481 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 1209 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 1720 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 1720 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
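A minimal download sketch for one of the packaged archives above (assuming the standard `huggingface_hub` API; the filename comes from the table):

```python
from huggingface_hub import hf_hub_download

# Fetch the raw archive listed in the first table row.
path = hf_hub_download(
    repo_id="CyberHarem/tooru_kobayashisanchinomaidragon",
    filename="dataset-raw.zip",
    repo_type="dataset",
)
print(path)
```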
CyberHarem/tooru_kobayashisanchinomaidragon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-12-27T19:09:19+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-12-27T19:13:55+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Tooru ================ This is the dataset of Tooru, containing 585 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...); the auto-crawling system is powered by DeepGHS Team (huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
e86cbf98aa372f68d0478b8726f0fdf174b061cd
# Dataset Card for Evaluation run of saberai/Zro1.5_3B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [saberai/Zro1.5_3B](https://huggingface.co/saberai/Zro1.5_3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_saberai__Zro1.5_3B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-27T19:33:43.363454](https://huggingface.co/datasets/open-llm-leaderboard/details_saberai__Zro1.5_3B/blob/main/results_2023-12-27T19-33-43.363454.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.2628913556214231, "acc_stderr": 0.031108716303916813, "acc_norm": 0.2632892835008201, "acc_norm_stderr": 0.03179345445075825, "mc1": 0.2386780905752754, "mc1_stderr": 0.014922629695456418, "mc2": 0.36891896664444634, "mc2_stderr": 0.01421300651619945 }, "harness|arc:challenge|25": { "acc": 0.3216723549488055, "acc_stderr": 0.013650488084494166, "acc_norm": 0.35921501706484643, "acc_norm_stderr": 0.014020224155839152 }, "harness|hellaswag|10": { "acc": 0.4644493128858793, "acc_stderr": 0.0049771527464785885, "acc_norm": 0.6111332403903604, "acc_norm_stderr": 0.004864966792310698 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.27, "acc_stderr": 0.044619604333847415, "acc_norm": 0.27, "acc_norm_stderr": 0.044619604333847415 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.3333333333333333, "acc_stderr": 0.04072314811876837, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.04072314811876837 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.21052631578947367, "acc_stderr": 0.03317672787533157, "acc_norm": 0.21052631578947367, "acc_norm_stderr": 0.03317672787533157 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.26, "acc_stderr": 0.04408440022768079, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2792452830188679, "acc_stderr": 0.027611163402399715, "acc_norm": 0.2792452830188679, "acc_norm_stderr": 0.027611163402399715 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.25, "acc_stderr": 0.03621034121889507, "acc_norm": 0.25, "acc_norm_stderr": 0.03621034121889507 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.23, "acc_stderr": 0.04229525846816506, "acc_norm": 0.23, "acc_norm_stderr": 0.04229525846816506 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.32, "acc_stderr": 0.046882617226215034, "acc_norm":
0.32, "acc_norm_stderr": 0.046882617226215034 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.21965317919075145, "acc_stderr": 0.031568093627031744, "acc_norm": 0.21965317919075145, "acc_norm_stderr": 0.031568093627031744 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.17647058823529413, "acc_stderr": 0.03793281185307811, "acc_norm": 0.17647058823529413, "acc_norm_stderr": 0.03793281185307811 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.29, "acc_stderr": 0.045604802157206845, "acc_norm": 0.29, "acc_norm_stderr": 0.045604802157206845 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.2978723404255319, "acc_stderr": 0.029896145682095462, "acc_norm": 0.2978723404255319, "acc_norm_stderr": 0.029896145682095462 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.21052631578947367, "acc_stderr": 0.0383515395439942, "acc_norm": 0.21052631578947367, "acc_norm_stderr": 0.0383515395439942 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2620689655172414, "acc_stderr": 0.036646663372252565, "acc_norm": 0.2620689655172414, "acc_norm_stderr": 0.036646663372252565 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.25925925925925924, "acc_stderr": 0.02256989707491842, "acc_norm": 0.25925925925925924, "acc_norm_stderr": 0.02256989707491842 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.2222222222222222, "acc_stderr": 0.037184890068181146, "acc_norm": 0.2222222222222222, "acc_norm_stderr": 0.037184890068181146 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.29, "acc_stderr": 0.04560480215720684, "acc_norm": 0.29, "acc_norm_stderr": 0.04560480215720684 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.25806451612903225, "acc_stderr": 0.024892469172462843, "acc_norm": 0.25806451612903225, "acc_norm_stderr": 0.024892469172462843 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.270935960591133, "acc_stderr": 0.031270907132976984, "acc_norm": 0.270935960591133, "acc_norm_stderr": 0.031270907132976984 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.19, "acc_stderr": 0.039427724440366234, "acc_norm": 0.19, "acc_norm_stderr": 0.039427724440366234 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.2727272727272727, "acc_stderr": 0.03477691162163659, "acc_norm": 0.2727272727272727, "acc_norm_stderr": 0.03477691162163659 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.26262626262626265, "acc_stderr": 0.03135305009533085, "acc_norm": 0.26262626262626265, "acc_norm_stderr": 0.03135305009533085 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.21761658031088082, "acc_stderr": 0.029778663037752947, "acc_norm": 0.21761658031088082, "acc_norm_stderr": 0.029778663037752947 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.2358974358974359, "acc_stderr": 0.021525965407408726, "acc_norm": 0.2358974358974359, "acc_norm_stderr": 0.021525965407408726 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.24814814814814815, "acc_stderr": 0.0263357394040558, "acc_norm": 0.24814814814814815, "acc_norm_stderr": 0.0263357394040558 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.23949579831932774, "acc_stderr": 0.02772206549336127, "acc_norm": 0.23949579831932774, "acc_norm_stderr": 0.02772206549336127 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2781456953642384, "acc_stderr": 0.03658603262763743, "acc_norm": 0.2781456953642384, "acc_norm_stderr": 0.03658603262763743 }, 
"harness|hendrycksTest-high_school_psychology|5": { "acc": 0.24954128440366974, "acc_stderr": 0.018553897629501624, "acc_norm": 0.24954128440366974, "acc_norm_stderr": 0.018553897629501624 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.23148148148148148, "acc_stderr": 0.028765111718046955, "acc_norm": 0.23148148148148148, "acc_norm_stderr": 0.028765111718046955 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.2647058823529412, "acc_stderr": 0.03096451792692339, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.03096451792692339 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.270042194092827, "acc_stderr": 0.028900721906293433, "acc_norm": 0.270042194092827, "acc_norm_stderr": 0.028900721906293433 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.30493273542600896, "acc_stderr": 0.030898610882477515, "acc_norm": 0.30493273542600896, "acc_norm_stderr": 0.030898610882477515 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.24427480916030533, "acc_stderr": 0.03768335959728745, "acc_norm": 0.24427480916030533, "acc_norm_stderr": 0.03768335959728745 }, "harness|hendrycksTest-international_law|5": { "acc": 0.3140495867768595, "acc_stderr": 0.04236964753041018, "acc_norm": 0.3140495867768595, "acc_norm_stderr": 0.04236964753041018 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.25, "acc_stderr": 0.04186091791394607, "acc_norm": 0.25, "acc_norm_stderr": 0.04186091791394607 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.3006134969325153, "acc_stderr": 0.03602511318806771, "acc_norm": 0.3006134969325153, "acc_norm_stderr": 0.03602511318806771 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.3125, "acc_stderr": 0.043994650575715215, "acc_norm": 0.3125, "acc_norm_stderr": 0.043994650575715215 }, "harness|hendrycksTest-management|5": { "acc": 0.24271844660194175, "acc_stderr": 0.04245022486384493, "acc_norm": 0.24271844660194175, "acc_norm_stderr": 0.04245022486384493 }, "harness|hendrycksTest-marketing|5": { "acc": 0.23076923076923078, "acc_stderr": 0.027601921381417593, "acc_norm": 0.23076923076923078, "acc_norm_stderr": 0.027601921381417593 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.21, "acc_stderr": 0.04093601807403326, "acc_norm": 0.21, "acc_norm_stderr": 0.04093601807403326 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.26181353767560667, "acc_stderr": 0.01572083867844526, "acc_norm": 0.26181353767560667, "acc_norm_stderr": 0.01572083867844526 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.23410404624277456, "acc_stderr": 0.022797110278071138, "acc_norm": 0.23410404624277456, "acc_norm_stderr": 0.022797110278071138 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.2424581005586592, "acc_stderr": 0.014333522059217889, "acc_norm": 0.2424581005586592, "acc_norm_stderr": 0.014333522059217889 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.2647058823529412, "acc_stderr": 0.02526169121972948, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.02526169121972948 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.27009646302250806, "acc_stderr": 0.025218040373410626, "acc_norm": 0.27009646302250806, "acc_norm_stderr": 0.025218040373410626 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.29012345679012347, "acc_stderr": 0.025251173936495022, "acc_norm": 0.29012345679012347, "acc_norm_stderr": 0.025251173936495022 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.24113475177304963, "acc_stderr": 0.02551873104953777, "acc_norm": 0.24113475177304963, 
"acc_norm_stderr": 0.02551873104953777 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.242503259452412, "acc_stderr": 0.01094657096634877, "acc_norm": 0.242503259452412, "acc_norm_stderr": 0.01094657096634877 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.1948529411764706, "acc_stderr": 0.024060599423487414, "acc_norm": 0.1948529411764706, "acc_norm_stderr": 0.024060599423487414 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.24673202614379086, "acc_stderr": 0.017440820367402503, "acc_norm": 0.24673202614379086, "acc_norm_stderr": 0.017440820367402503 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.2636363636363636, "acc_stderr": 0.04220224692971987, "acc_norm": 0.2636363636363636, "acc_norm_stderr": 0.04220224692971987 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.18775510204081633, "acc_stderr": 0.02500025603954621, "acc_norm": 0.18775510204081633, "acc_norm_stderr": 0.02500025603954621 }, "harness|hendrycksTest-sociology|5": { "acc": 0.22885572139303484, "acc_stderr": 0.029705284056772436, "acc_norm": 0.22885572139303484, "acc_norm_stderr": 0.029705284056772436 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.21, "acc_stderr": 0.040936018074033256, "acc_norm": 0.21, "acc_norm_stderr": 0.040936018074033256 }, "harness|hendrycksTest-virology|5": { "acc": 0.3313253012048193, "acc_stderr": 0.03664314777288086, "acc_norm": 0.3313253012048193, "acc_norm_stderr": 0.03664314777288086 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.2631578947368421, "acc_stderr": 0.03377310252209196, "acc_norm": 0.2631578947368421, "acc_norm_stderr": 0.03377310252209196 }, "harness|truthfulqa:mc|0": { "mc1": 0.2386780905752754, "mc1_stderr": 0.014922629695456418, "mc2": 0.36891896664444634, "mc2_stderr": 0.01421300651619945 }, "harness|winogrande|5": { "acc": 0.5872138910812944, "acc_stderr": 0.013837060648682105 }, "harness|gsm8k|5": { "acc": 0.09931766489764973, "acc_stderr": 0.008238371412683984 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
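As a companion to the loading snippet above, the aggregated metrics quoted under "Latest results" can also be read directly from the "results" configuration; a minimal sketch, assuming the "latest" split alias described earlier in this card:

```python
from datasets import load_dataset

# "results" stores the aggregated metrics of each run; the "latest"
# split always points at the most recent evaluation of this model.
results = load_dataset(
    "open-llm-leaderboard/details_saberai__Zro1.5_3B",
    "results",
    split="latest",
)
print(results[0])
```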
open-llm-leaderboard/details_saberai__Zro1.5_3B
[ "region:us" ]
2023-12-27T19:35:24+00:00
{"pretty_name": "Evaluation run of saberai/Zro1.5_3B", "dataset_summary": "Dataset automatically created during the evaluation run of model [saberai/Zro1.5_3B](https://huggingface.co/saberai/Zro1.5_3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_saberai__Zro1.5_3B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-27T19:33:43.363454](https://huggingface.co/datasets/open-llm-leaderboard/details_saberai__Zro1.5_3B/blob/main/results_2023-12-27T19-33-43.363454.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2628913556214231,\n \"acc_stderr\": 0.031108716303916813,\n \"acc_norm\": 0.2632892835008201,\n \"acc_norm_stderr\": 0.03179345445075825,\n \"mc1\": 0.2386780905752754,\n \"mc1_stderr\": 0.014922629695456418,\n \"mc2\": 0.36891896664444634,\n \"mc2_stderr\": 0.01421300651619945\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3216723549488055,\n \"acc_stderr\": 0.013650488084494166,\n \"acc_norm\": 0.35921501706484643,\n \"acc_norm_stderr\": 0.014020224155839152\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.4644493128858793,\n \"acc_stderr\": 0.0049771527464785885,\n \"acc_norm\": 0.6111332403903604,\n \"acc_norm_stderr\": 0.004864966792310698\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847415,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847415\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04072314811876837,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04072314811876837\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.03317672787533157,\n \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.03317672787533157\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2792452830188679,\n \"acc_stderr\": 0.027611163402399715,\n \"acc_norm\": 0.2792452830188679,\n \"acc_norm_stderr\": 0.027611163402399715\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n 
\"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.21965317919075145,\n \"acc_stderr\": 0.031568093627031744,\n \"acc_norm\": 0.21965317919075145,\n \"acc_norm_stderr\": 0.031568093627031744\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.03793281185307811,\n \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.03793281185307811\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.2978723404255319,\n \"acc_stderr\": 0.029896145682095462,\n \"acc_norm\": 0.2978723404255319,\n \"acc_norm_stderr\": 0.029896145682095462\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.0383515395439942,\n \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.0383515395439942\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2620689655172414,\n \"acc_stderr\": 0.036646663372252565,\n \"acc_norm\": 0.2620689655172414,\n \"acc_norm_stderr\": 0.036646663372252565\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.25925925925925924,\n \"acc_stderr\": 0.02256989707491842,\n \"acc_norm\": 0.25925925925925924,\n \"acc_norm_stderr\": 0.02256989707491842\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.037184890068181146,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.037184890068181146\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.25806451612903225,\n \"acc_stderr\": 0.024892469172462843,\n \"acc_norm\": 0.25806451612903225,\n \"acc_norm_stderr\": 0.024892469172462843\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.270935960591133,\n \"acc_stderr\": 0.031270907132976984,\n \"acc_norm\": 0.270935960591133,\n \"acc_norm_stderr\": 0.031270907132976984\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.2727272727272727,\n \"acc_stderr\": 0.03477691162163659,\n \"acc_norm\": 0.2727272727272727,\n \"acc_norm_stderr\": 0.03477691162163659\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.26262626262626265,\n \"acc_stderr\": 0.03135305009533085,\n \"acc_norm\": 0.26262626262626265,\n \"acc_norm_stderr\": 0.03135305009533085\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.21761658031088082,\n \"acc_stderr\": 0.029778663037752947,\n \"acc_norm\": 0.21761658031088082,\n \"acc_norm_stderr\": 0.029778663037752947\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2358974358974359,\n \"acc_stderr\": 
0.021525965407408726,\n \"acc_norm\": 0.2358974358974359,\n \"acc_norm_stderr\": 0.021525965407408726\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24814814814814815,\n \"acc_stderr\": 0.0263357394040558,\n \"acc_norm\": 0.24814814814814815,\n \"acc_norm_stderr\": 0.0263357394040558\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.23949579831932774,\n \"acc_stderr\": 0.02772206549336127,\n \"acc_norm\": 0.23949579831932774,\n \"acc_norm_stderr\": 0.02772206549336127\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2781456953642384,\n \"acc_stderr\": 0.03658603262763743,\n \"acc_norm\": 0.2781456953642384,\n \"acc_norm_stderr\": 0.03658603262763743\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.24954128440366974,\n \"acc_stderr\": 0.018553897629501624,\n \"acc_norm\": 0.24954128440366974,\n \"acc_norm_stderr\": 0.018553897629501624\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.028765111718046955,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.028765111718046955\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.03096451792692339,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.03096451792692339\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.270042194092827,\n \"acc_stderr\": 0.028900721906293433,\n \"acc_norm\": 0.270042194092827,\n \"acc_norm_stderr\": 0.028900721906293433\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.30493273542600896,\n \"acc_stderr\": 0.030898610882477515,\n \"acc_norm\": 0.30493273542600896,\n \"acc_norm_stderr\": 0.030898610882477515\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.24427480916030533,\n \"acc_stderr\": 0.03768335959728745,\n \"acc_norm\": 0.24427480916030533,\n \"acc_norm_stderr\": 0.03768335959728745\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.3140495867768595,\n \"acc_stderr\": 0.04236964753041018,\n \"acc_norm\": 0.3140495867768595,\n \"acc_norm_stderr\": 0.04236964753041018\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.3006134969325153,\n \"acc_stderr\": 0.03602511318806771,\n \"acc_norm\": 0.3006134969325153,\n \"acc_norm_stderr\": 0.03602511318806771\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3125,\n \"acc_stderr\": 0.043994650575715215,\n \"acc_norm\": 0.3125,\n \"acc_norm_stderr\": 0.043994650575715215\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.24271844660194175,\n \"acc_stderr\": 0.04245022486384493,\n \"acc_norm\": 0.24271844660194175,\n \"acc_norm_stderr\": 0.04245022486384493\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.23076923076923078,\n \"acc_stderr\": 0.027601921381417593,\n \"acc_norm\": 0.23076923076923078,\n \"acc_norm_stderr\": 0.027601921381417593\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.04093601807403326,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.04093601807403326\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26181353767560667,\n \"acc_stderr\": 0.01572083867844526,\n \"acc_norm\": 0.26181353767560667,\n \"acc_norm_stderr\": 0.01572083867844526\n },\n 
\"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.23410404624277456,\n \"acc_stderr\": 0.022797110278071138,\n \"acc_norm\": 0.23410404624277456,\n \"acc_norm_stderr\": 0.022797110278071138\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.27009646302250806,\n \"acc_stderr\": 0.025218040373410626,\n \"acc_norm\": 0.27009646302250806,\n \"acc_norm_stderr\": 0.025218040373410626\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.29012345679012347,\n \"acc_stderr\": 0.025251173936495022,\n \"acc_norm\": 0.29012345679012347,\n \"acc_norm_stderr\": 0.025251173936495022\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.24113475177304963,\n \"acc_stderr\": 0.02551873104953777,\n \"acc_norm\": 0.24113475177304963,\n \"acc_norm_stderr\": 0.02551873104953777\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.242503259452412,\n \"acc_stderr\": 0.01094657096634877,\n \"acc_norm\": 0.242503259452412,\n \"acc_norm_stderr\": 0.01094657096634877\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.1948529411764706,\n \"acc_stderr\": 0.024060599423487414,\n \"acc_norm\": 0.1948529411764706,\n \"acc_norm_stderr\": 0.024060599423487414\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.24673202614379086,\n \"acc_stderr\": 0.017440820367402503,\n \"acc_norm\": 0.24673202614379086,\n \"acc_norm_stderr\": 0.017440820367402503\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.2636363636363636,\n \"acc_stderr\": 0.04220224692971987,\n \"acc_norm\": 0.2636363636363636,\n \"acc_norm_stderr\": 0.04220224692971987\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.18775510204081633,\n \"acc_stderr\": 0.02500025603954621,\n \"acc_norm\": 0.18775510204081633,\n \"acc_norm_stderr\": 0.02500025603954621\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.22885572139303484,\n \"acc_stderr\": 0.029705284056772436,\n \"acc_norm\": 0.22885572139303484,\n \"acc_norm_stderr\": 0.029705284056772436\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.21,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.21,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.3313253012048193,\n \"acc_stderr\": 0.03664314777288086,\n \"acc_norm\": 0.3313253012048193,\n \"acc_norm_stderr\": 0.03664314777288086\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.03377310252209196,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.03377310252209196\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2386780905752754,\n \"mc1_stderr\": 0.014922629695456418,\n \"mc2\": 0.36891896664444634,\n \"mc2_stderr\": 0.01421300651619945\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.5872138910812944,\n \"acc_stderr\": 0.013837060648682105\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.09931766489764973,\n \"acc_stderr\": 0.008238371412683984\n }\n}\n```", "repo_url": "https://huggingface.co/saberai/Zro1.5_3B", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|arc:challenge|25_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|gsm8k|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hellaswag|10_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T19-33-43.363454.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T19-33-43.363454.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-27T19-33-43.363454.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-27T19-33-43.363454.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T19-33-43.363454.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-27T19-33-43.363454.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["**/details_harness|winogrande|5_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-27T19-33-43.363454.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_27T19_33_43.363454", "path": ["results_2023-12-27T19-33-43.363454.parquet"]}, {"split": "latest", "path": 
["results_2023-12-27T19-33-43.363454.parquet"]}]}]}
2023-12-27T19:35:44+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of saberai/Zro1.5_3B Dataset automatically created during the evaluation run of model saberai/Zro1.5_3B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-27T19:33:43.363454 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
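The loading snippet referenced above ("To load the details from a run...") was not preserved in this dump. A minimal sketch follows, assuming the usual Open LLM Leaderboard details-repo naming: the repo id below is an assumption inferred from the evaluated model name, while the "results" config and its "latest" split are declared in the dataset metadata above.

```python
from datasets import load_dataset

# Assumed details-repo id, inferred from the evaluated model name.
# The "results" config and its "latest" split come from the metadata above.
details = load_dataset(
    "open-llm-leaderboard/details_saberai__Zro1.5_3B",  # assumed repo id
    "results",
    split="latest",
)
print(details[0])
```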
[ "# Dataset Card for Evaluation run of saberai/Zro1.5_3B\n\n\n\nDataset automatically created during the evaluation run of model saberai/Zro1.5_3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T19:33:43.363454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of saberai/Zro1.5_3B\n\n\n\nDataset automatically created during the evaluation run of model saberai/Zro1.5_3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-27T19:33:43.363454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 179, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of saberai/Zro1.5_3B\n\n\n\nDataset automatically created during the evaluation run of model saberai/Zro1.5_3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-27T19:33:43.363454(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
cac9719ccf9b2b9506da7a3c5da225617a928e46
# Dataset of Kanna Kamui This is the dataset of Kanna Kamui, containing 363 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 363 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 803 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 892 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 363 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 363 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 363 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 803 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 803 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 603 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 892 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 892 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
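Since the table above lists plain zip archives stored in the repo, one package can presumably be fetched with the standard hub download call. This is a sketch assuming the archives sit at the repo root, as the table's relative links suggest:

```python
from huggingface_hub import hf_hub_download

# Fetch one packaged variant from the table above; the return value
# is the local cache path of the downloaded archive.
zip_path = hf_hub_download(
    repo_id="CyberHarem/kanna_kamui_kobayashisanchinomaidragon",
    filename="dataset-384x512.zip",  # archive name taken from the table
    repo_type="dataset",
)
print(zip_path)
```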
CyberHarem/kanna_kamui_kobayashisanchinomaidragon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-12-27T19:45:23+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-12-27T19:47:56+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Kanna Kamui ====================== This is the dataset of Kanna Kamui, containing 363 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
bec91a4ba73bd80d5f67617461cf93ee8f962294
The dataset is derived from [mounikaiiith/Telugu_Emotion](https://huggingface.co/datasets/mounikaiiith/Telugu_Emotion) and has been transliterated using [ai4bharat-transliteration](https://pypi.org/project/ai4bharat-transliteration/) and [Indic Trans](https://ai4bharat.iitm.ac.in/indic-trans2/).
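A quick way to compare the two transliteration systems side by side; the `train` split and the column names (`Sentence`, `IndicXlit`, `IndicTrans`, `Hybrid`) come from the dataset's own feature list:

```python
from datasets import load_dataset

ds = load_dataset("eswardivi/transliter_telugu_reviews", split="train")
row = ds[0]
# Print the source sentence next to each transliteration column.
for col in ["Sentence", "IndicXlit", "IndicTrans", "Hybrid"]:
    print(f"{col}: {row[col]}")
```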
eswardivi/transliter_telugu_reviews
[ "region:us" ]
2023-12-27T19:46:33+00:00
{"dataset_info": {"features": [{"name": "Sentence", "dtype": "string"}, {"name": "IndicXlit", "dtype": "string"}, {"name": "IndicTrans", "dtype": "string"}, {"name": "Hybrid", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1303856, "num_examples": 2460}], "download_size": 778266, "dataset_size": 1303856}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-01-25T10:31:35+00:00
[]
[]
TAGS #region-us
The dataset is derived from mounikaiiith/Telugu_Emotion and has been transliterated using ai4bharat-transliteration and Indic Trans.
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
2cea456a4def2cd5cd9b61cb2d4d65ef8d7f2a0b
OpenAssistant TOP-1 Conversation Threads Guanaco style export of the best conversation threads from the open-assistant.io database - exported August 25, 2023 - jsonl files with chatml formatted conversations - train: 4,295 samples - Only Spanish examples - Add column to count number of messages
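Each row stores a whole thread as a single chatml-formatted string, so a rough turn count can be cross-checked against the `num_turns` column. The `<|im_start|>` delimiter below is an assumption taken from the chatml convention the card refers to; the column names are declared in the dataset metadata:

```python
from datasets import load_dataset

ds = load_dataset("dbuos/oasst_top1_es", split="train")
row = ds[0]
# "<|im_start|>" is the assumed chatml turn delimiter; "text", "lang",
# and "num_turns" are the columns declared in the dataset metadata.
approx_turns = row["text"].count("<|im_start|>")
print(row["lang"], row["num_turns"], approx_turns)
```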
dbuos/oasst_top1_es
[ "language:es", "license:apache-2.0", "region:us" ]
2023-12-27T20:08:16+00:00
{"language": ["es"], "license": "apache-2.0", "pretty_name": "a", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "num_turns", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 6302055, "num_examples": 4295}], "download_size": 3386782, "dataset_size": 6302055}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-27T20:13:11+00:00
[]
[ "es" ]
TAGS #language-Spanish #license-apache-2.0 #region-us
OpenAssistant TOP-1 Conversation Threads Guanaco style export of the best conversation threads from the URL database - exported August 25, 2023 - jsonl files with chatml formatted conversations - train: 4,295 samples - Only Spanish examples - Add column to count number of messages
[]
[ "TAGS\n#language-Spanish #license-apache-2.0 #region-us \n" ]
[ 19 ]
[ "passage: TAGS\n#language-Spanish #license-apache-2.0 #region-us \n" ]
f4487bcf5a2cfc17ae69ea7fa095e0dd6090b727
#### OpenAssistant TOP-1 Conversation Threads ##### Guanaco style export of the best conversation threads from the open-assistant.io database - exported August 25, 2023 - jsonl files with chatml formatted conversations - train: 5,023 samples - Only English examples - Add column to count number of messages
dbuos/oasst_top1_en
[ "size_categories:1K<n<10K", "language:en", "license:apache-2.0", "region:us" ]
2023-12-27T20:16:05+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}, {"name": "num_turns", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9908776, "num_examples": 5023}], "download_size": 5271098, "dataset_size": 9908776}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-27T20:21:21+00:00
[]
[ "en" ]
TAGS #size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us
#### OpenAssistant TOP-1 Conversation Threads ##### Guanaco style export of the best conversation threads from the URL database - exported August 25, 2023 - jsonl files with chatml formatted conversations - train: 5,023 samples - Only English examples - Add column to count number of messages
[ "#### OpenAssistant TOP-1 Conversation Threads", "##### Guanacco style export of the best conversation threads from the URL database\n- exported August 25, 2023\n- jsonl files with chatml formatted conversations\n- train: 5,023 samples\n- Only English examples\n- Add column to count number of messages" ]
[ "TAGS\n#size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n", "#### OpenAssistant TOP-1 Conversation Threads", "##### Guanacco style export of the best conversation threads from the URL database\n- exported August 25, 2023\n- jsonl files with chatml formatted conversations\n- train: 5,023 samples\n- Only English examples\n- Add column to count number of messages" ]
[ 30, 13, 60 ]
[ "passage: TAGS\n#size_categories-1K<n<10K #language-English #license-apache-2.0 #region-us \n#### OpenAssistant TOP-1 Conversation Threads##### Guanacco style export of the best conversation threads from the URL database\n- exported August 25, 2023\n- jsonl files with chatml formatted conversations\n- train: 5,023 samples\n- Only English examples\n- Add column to count number of messages" ]
35d8b50b4c8fd7e4754bbfabde7bd2683885005d
# OpenAssistant TOP-1 Conversation Threads - [Guanaco](https://huggingface.co/datasets/timdettmers/openassistant-guanaco) style export of the best conversation threads from the [open-assistant.io](https://open-assistant.io/) database - exported August 25, 2023 - jsonl files with [chatml](https://github.com/openai/openai-python/blob/main/chatml.md) formatted conversations - train: 12,947 samples - With a column indicating the language used.
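Because this export keeps a per-row language column, slicing out one language is a one-liner; the `lang` column and `train` split are declared in the metadata below:

```python
from datasets import load_dataset

ds = load_dataset("dbuos/oasst_top1_2023-08-25_languages", split="train")
# Keep only the Spanish threads via the per-row "lang" column.
es_only = ds.filter(lambda row: row["lang"] == "es")
print(len(es_only), "of", len(ds))
```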
dbuos/oasst_top1_2023-08-25_languages
[ "task_categories:conversational", "size_categories:10K<n<100K", "license:apache-2.0", "region:us" ]
2023-12-27T20:26:19+00:00
{"license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["conversational"], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "lang", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23211220, "num_examples": 12947}], "download_size": 13220375, "dataset_size": 23211220}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-27T20:28:47+00:00
[]
[]
TAGS #task_categories-conversational #size_categories-10K<n<100K #license-apache-2.0 #region-us
# OpenAssistant TOP-1 Conversation Threads - Guanaco style export of the best conversation threads from the URL database - exported August 25, 2023 - jsonl files with chatml formatted conversations - train: 12,947 samples - With a column indicating the language used.
[ "# OpenAssistant TOP-1 Conversation Threads\n\n- Guanacco style export of the best conversation threads from the URL database\n- exported August 25, 2023\n- jsonl files with chatml formatted conversations\n- train: 12,947 samples\n- With a column indicating the language used." ]
[ "TAGS\n#task_categories-conversational #size_categories-10K<n<100K #license-apache-2.0 #region-us \n", "# OpenAssistant TOP-1 Conversation Threads\n\n- Guanacco style export of the best conversation threads from the URL database\n- exported August 25, 2023\n- jsonl files with chatml formatted conversations\n- train: 12,947 samples\n- With a column indicating the language used." ]
[ 36, 68 ]
[ "passage: TAGS\n#task_categories-conversational #size_categories-10K<n<100K #license-apache-2.0 #region-us \n# OpenAssistant TOP-1 Conversation Threads\n\n- Guanacco style export of the best conversation threads from the URL database\n- exported August 25, 2023\n- jsonl files with chatml formatted conversations\n- train: 12,947 samples\n- With a column indicating the language used." ]
97bc4669b35bf9a3c843a321285c4b57eb72d9c0
# Dataset of Kobayashi This is the dataset of Kobayashi, containing 552 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 552 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 1305 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 1518 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 552 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 552 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 552 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 1305 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 1305 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 1005 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 1518 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 1518 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/kobayashi_kobayashisanchinomaidragon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-12-27T20:37:53+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-12-27T20:41:59+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Kobayashi ==================== This is the dataset of Kobayashi, containing 552 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
69b0991ee606079105c4f9e271bcc1fef0014e03
# Multilingual ARC ## Dataset Summary This dataset is a machine-translated version of the [ARC dataset](https://huggingface.co/datasets/ai2_arc). The Icelandic (is) part was translated with [Miðeind](https://mideind.is/english.html)'s Greynir model and Norwegian (nb) was translated with [DeepL](https://deepl.com/). The rest of the languages were translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to [this GitHub repository](https://github.com/nlp-uoregon/mlmm-evaluation).
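Each language lives in its own config with train/val/test splits (config and split names as declared in the metadata below), so loading one language looks like this sketch:

```python
from datasets import load_dataset

# "da" is one of the per-language configs; splits are train/val/test.
arc_da = load_dataset("alexandrainst/m_arc", "da")
print({split: len(arc_da[split]) for split in arc_da})
```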
alexandrainst/m_arc
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "size_categories:10K<n<100K", "language:ar", "language:bn", "language:ca", "language:da", "language:de", "language:en", "language:es", "language:eu", "language:fr", "language:gu", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:kn", "language:ml", "language:mr", "language:nb", "language:no", "language:ne", "language:nl", "language:pt", "language:ro", "language:ru", "language:sk", "language:sr", "language:sv", "language:ta", "language:te", "language:uk", "language:vi", "language:zh", "license:cc-by-nc-4.0", "region:us" ]
2023-12-27T20:54:59+00:00
{"language": ["ar", "bn", "ca", "da", "de", "en", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "is", "it", "kn", "ml", "mr", "nb", "no", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "configs": [{"config_name": "ar", "data_files": [{"split": "train", "path": "data/ar/train.jsonl"}, {"split": "val", "path": "data/ar/val.jsonl"}, {"split": "test", "path": "data/ar/test.jsonl"}]}, {"config_name": "bn", "data_files": [{"split": "train", "path": "data/bn/train.jsonl"}, {"split": "val", "path": "data/bn/val.jsonl"}, {"split": "test", "path": "data/bn/test.jsonl"}]}, {"config_name": "ca", "data_files": [{"split": "train", "path": "data/ca/train.jsonl"}, {"split": "val", "path": "data/ca/val.jsonl"}, {"split": "test", "path": "data/ca/test.jsonl"}]}, {"config_name": "da", "data_files": [{"split": "train", "path": "data/da/train.jsonl"}, {"split": "val", "path": "data/da/val.jsonl"}, {"split": "test", "path": "data/da/test.jsonl"}]}, {"config_name": "de", "data_files": [{"split": "train", "path": "data/de/train.jsonl"}, {"split": "val", "path": "data/de/val.jsonl"}, {"split": "test", "path": "data/de/test.jsonl"}]}, {"config_name": "en", "data_files": [{"split": "train", "path": "data/en/train.jsonl"}, {"split": "val", "path": "data/en/val.jsonl"}, {"split": "test", "path": "data/en/test.jsonl"}]}, {"config_name": "es", "data_files": [{"split": "train", "path": "data/es/train.jsonl"}, {"split": "val", "path": "data/es/val.jsonl"}, {"split": "test", "path": "data/es/test.jsonl"}]}, {"config_name": "eu", "data_files": [{"split": "train", "path": "data/eu/train.jsonl"}, {"split": "val", "path": "data/eu/val.jsonl"}, {"split": "test", "path": "data/eu/test.jsonl"}]}, {"config_name": "fr", "data_files": [{"split": "train", "path": "data/fr/train.jsonl"}, {"split": "val", "path": "data/fr/val.jsonl"}, {"split": "test", "path": "data/fr/test.jsonl"}]}, {"config_name": "gu", "data_files": [{"split": "train", "path": "data/gu/train.jsonl"}, {"split": "val", "path": "data/gu/val.jsonl"}, {"split": "test", "path": "data/gu/test.jsonl"}]}, {"config_name": "hi", "data_files": [{"split": "train", "path": "data/hi/train.jsonl"}, {"split": "val", "path": "data/hi/val.jsonl"}, {"split": "test", "path": "data/hi/test.jsonl"}]}, {"config_name": "hr", "data_files": [{"split": "train", "path": "data/hr/train.jsonl"}, {"split": "val", "path": "data/hr/val.jsonl"}, {"split": "test", "path": "data/hr/test.jsonl"}]}, {"config_name": "hu", "data_files": [{"split": "train", "path": "data/hu/train.jsonl"}, {"split": "val", "path": "data/hu/val.jsonl"}, {"split": "test", "path": "data/hu/test.jsonl"}]}, {"config_name": "hy", "data_files": [{"split": "train", "path": "data/hy/train.jsonl"}, {"split": "val", "path": "data/hy/val.jsonl"}, {"split": "test", "path": "data/hy/test.jsonl"}]}, {"config_name": "id", "data_files": [{"split": "train", "path": "data/id/train.jsonl"}, {"split": "val", "path": "data/id/val.jsonl"}, {"split": "test", "path": "data/id/test.jsonl"}]}, {"config_name": "is", "data_files": [{"split": "train", "path": "data/is/train.jsonl"}, {"split": "val", "path": "data/is/val.jsonl"}, {"split": "test", "path": "data/is/test.jsonl"}]}, {"config_name": "it", "data_files": [{"split": "train", "path": "data/it/train.jsonl"}, {"split": "val", "path": "data/it/val.jsonl"}, {"split": "test", "path": "data/it/test.jsonl"}]}, 
{"config_name": "kn", "data_files": [{"split": "train", "path": "data/kn/train.jsonl"}, {"split": "val", "path": "data/kn/val.jsonl"}, {"split": "test", "path": "data/kn/test.jsonl"}]}, {"config_name": "ml", "data_files": [{"split": "train", "path": "data/ml/train.jsonl"}, {"split": "val", "path": "data/ml/val.jsonl"}, {"split": "test", "path": "data/ml/test.jsonl"}]}, {"config_name": "mr", "data_files": [{"split": "train", "path": "data/mr/train.jsonl"}, {"split": "val", "path": "data/mr/val.jsonl"}, {"split": "test", "path": "data/mr/test.jsonl"}]}, {"config_name": "nb", "data_files": [{"split": "train", "path": "data/nb/train.jsonl"}, {"split": "val", "path": "data/nb/val.jsonl"}, {"split": "test", "path": "data/nb/test.jsonl"}]}, {"config_name": "ne", "data_files": [{"split": "train", "path": "data/ne/train.jsonl"}, {"split": "val", "path": "data/ne/val.jsonl"}, {"split": "test", "path": "data/ne/test.jsonl"}]}, {"config_name": "nl", "data_files": [{"split": "train", "path": "data/nl/train.jsonl"}, {"split": "val", "path": "data/nl/val.jsonl"}, {"split": "test", "path": "data/nl/test.jsonl"}]}, {"config_name": "pt", "data_files": [{"split": "train", "path": "data/pt/train.jsonl"}, {"split": "val", "path": "data/pt/val.jsonl"}, {"split": "test", "path": "data/pt/test.jsonl"}]}, {"config_name": "ro", "data_files": [{"split": "train", "path": "data/ro/train.jsonl"}, {"split": "val", "path": "data/ro/val.jsonl"}, {"split": "test", "path": "data/ro/test.jsonl"}]}, {"config_name": "ru", "data_files": [{"split": "train", "path": "data/ru/train.jsonl"}, {"split": "val", "path": "data/ru/val.jsonl"}, {"split": "test", "path": "data/ru/test.jsonl"}]}, {"config_name": "sk", "data_files": [{"split": "train", "path": "data/sk/train.jsonl"}, {"split": "val", "path": "data/sk/val.jsonl"}, {"split": "test", "path": "data/sk/test.jsonl"}]}, {"config_name": "sr", "data_files": [{"split": "train", "path": "data/sr/train.jsonl"}, {"split": "val", "path": "data/sr/val.jsonl"}, {"split": "test", "path": "data/sr/test.jsonl"}]}, {"config_name": "sv", "data_files": [{"split": "train", "path": "data/sv/train.jsonl"}, {"split": "val", "path": "data/sv/val.jsonl"}, {"split": "test", "path": "data/sv/test.jsonl"}]}, {"config_name": "ta", "data_files": [{"split": "train", "path": "data/ta/train.jsonl"}, {"split": "val", "path": "data/ta/val.jsonl"}, {"split": "test", "path": "data/ta/test.jsonl"}]}, {"config_name": "te", "data_files": [{"split": "train", "path": "data/te/train.jsonl"}, {"split": "val", "path": "data/te/val.jsonl"}, {"split": "test", "path": "data/te/test.jsonl"}]}, {"config_name": "uk", "data_files": [{"split": "train", "path": "data/uk/train.jsonl"}, {"split": "val", "path": "data/uk/val.jsonl"}, {"split": "test", "path": "data/uk/test.jsonl"}]}, {"config_name": "vi", "data_files": [{"split": "train", "path": "data/vi/train.jsonl"}, {"split": "val", "path": "data/vi/val.jsonl"}, {"split": "test", "path": "data/vi/test.jsonl"}]}, {"config_name": "zh", "data_files": [{"split": "train", "path": "data/zh/train.jsonl"}, {"split": "val", "path": "data/zh/val.jsonl"}, {"split": "test", "path": "data/zh/test.jsonl"}]}]}
2024-01-15T14:53:25+00:00
[]
[ "ar", "bn", "ca", "da", "de", "en", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "is", "it", "kn", "ml", "mr", "nb", "no", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-English #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Norwegian Bokmål #language-Norwegian #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us
# Multilingual ARC ## Dataset Summary This dataset is a machine-translated version of the ARC dataset. The Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages were translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this GitHub repository.
[ "# Multilingual ARC", "## Dataset Summary\nThis dataset is a machine translated version of the ARC dataset.\n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-English #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Norwegian Bokmål #language-Norwegian #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us \n", "# Multilingual ARC", "## Dataset Summary\nThis dataset is a machine translated version of the ARC dataset.\n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
[ 251, 6, 102 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-English #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Norwegian Bokmål #language-Norwegian #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us \n# Multilingual ARC## Dataset Summary\nThis dataset is a machine translated version of the ARC dataset.\n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
9d31dc982bd6285e081e3e3136332a38b9c1d7b7
# Multilingual HellaSwag ## Dataset Summary This dataset is a machine-translated version of the [HellaSwag dataset](https://huggingface.co/datasets/Rowan/hellaswag). The Icelandic (is) part was translated with [Miðeind](https://mideind.is/english.html)'s Greynir model and Norwegian (nb) was translated with [DeepL](https://deepl.com/). The rest of the languages were translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to [this GitHub repository](https://github.com/nlp-uoregon/mlmm-evaluation).
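Unlike m_arc, the metadata below declares only a val split for each language config, so a loading sketch targets that split directly:

```python
from datasets import load_dataset

# Each language config exposes only a "val" split here.
hellaswag_is = load_dataset("alexandrainst/m_hellaswag", "is", split="val")
print(len(hellaswag_is))
```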
alexandrainst/m_hellaswag
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "size_categories:10K<n<100K", "language:ar", "language:bn", "language:ca", "language:da", "language:de", "language:es", "language:eu", "language:fr", "language:gu", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:it", "language:kn", "language:ml", "language:mr", "language:ne", "language:nl", "language:pt", "language:ro", "language:ru", "language:sk", "language:sr", "language:sv", "language:ta", "language:te", "language:uk", "language:vi", "language:zh", "language:is", "language:en", "language:no", "language:nb", "license:cc-by-nc-4.0", "region:us" ]
2023-12-27T20:55:26+00:00
{"language": ["ar", "bn", "ca", "da", "de", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "it", "kn", "ml", "mr", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh", "is", "en", "no", "nb"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "configs": [{"config_name": "ar", "data_files": [{"split": "val", "path": "data/ar/val.jsonl"}]}, {"config_name": "bn", "data_files": [{"split": "val", "path": "data/bn/val.jsonl"}]}, {"config_name": "ca", "data_files": [{"split": "val", "path": "data/ca/val.jsonl"}]}, {"config_name": "da", "data_files": [{"split": "val", "path": "data/da/val.jsonl"}]}, {"config_name": "de", "data_files": [{"split": "val", "path": "data/de/val.jsonl"}]}, {"config_name": "es", "data_files": [{"split": "val", "path": "data/es/val.jsonl"}]}, {"config_name": "eu", "data_files": [{"split": "val", "path": "data/eu/val.jsonl"}]}, {"config_name": "fr", "data_files": [{"split": "val", "path": "data/fr/val.jsonl"}]}, {"config_name": "gu", "data_files": [{"split": "val", "path": "data/gu/val.jsonl"}]}, {"config_name": "hi", "data_files": [{"split": "val", "path": "data/hi/val.jsonl"}]}, {"config_name": "hr", "data_files": [{"split": "val", "path": "data/hr/val.jsonl"}]}, {"config_name": "hu", "data_files": [{"split": "val", "path": "data/hu/val.jsonl"}]}, {"config_name": "hy", "data_files": [{"split": "val", "path": "data/hy/val.jsonl"}]}, {"config_name": "id", "data_files": [{"split": "val", "path": "data/id/val.jsonl"}]}, {"config_name": "it", "data_files": [{"split": "val", "path": "data/it/val.jsonl"}]}, {"config_name": "kn", "data_files": [{"split": "val", "path": "data/kn/val.jsonl"}]}, {"config_name": "ml", "data_files": [{"split": "val", "path": "data/ml/val.jsonl"}]}, {"config_name": "mr", "data_files": [{"split": "val", "path": "data/mr/val.jsonl"}]}, {"config_name": "ne", "data_files": [{"split": "val", "path": "data/ne/val.jsonl"}]}, {"config_name": "nl", "data_files": [{"split": "val", "path": "data/nl/val.jsonl"}]}, {"config_name": "pt", "data_files": [{"split": "val", "path": "data/pt/val.jsonl"}]}, {"config_name": "ro", "data_files": [{"split": "val", "path": "data/ro/val.jsonl"}]}, {"config_name": "ru", "data_files": [{"split": "val", "path": "data/ru/val.jsonl"}]}, {"config_name": "sk", "data_files": [{"split": "val", "path": "data/sk/val.jsonl"}]}, {"config_name": "sr", "data_files": [{"split": "val", "path": "data/sr/val.jsonl"}]}, {"config_name": "sv", "data_files": [{"split": "val", "path": "data/sv/val.jsonl"}]}, {"config_name": "ta", "data_files": [{"split": "val", "path": "data/ta/val.jsonl"}]}, {"config_name": "te", "data_files": [{"split": "val", "path": "data/te/val.jsonl"}]}, {"config_name": "uk", "data_files": [{"split": "val", "path": "data/uk/val.jsonl"}]}, {"config_name": "vi", "data_files": [{"split": "val", "path": "data/vi/val.jsonl"}]}, {"config_name": "zh", "data_files": [{"split": "val", "path": "data/zh/val.jsonl"}]}, {"config_name": "en", "data_files": [{"split": "val", "path": "data/en/val.jsonl"}]}, {"config_name": "is", "data_files": [{"split": "val", "path": "data/is/val.jsonl"}]}, {"config_name": "nb", "data_files": [{"split": "val", "path": "data/nb/val.jsonl"}]}]}
2024-02-12T16:32:54+00:00
[]
[ "ar", "bn", "ca", "da", "de", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "it", "kn", "ml", "mr", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh", "is", "en", "no", "nb" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #language-Icelandic #language-English #language-Norwegian #language-Norwegian Bokmål #license-cc-by-nc-4.0 #region-us
# Multilingual HellaSwag ## Dataset Summary This dataset is a machine-translated version of the HellaSwag dataset. The Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages were translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this GitHub repository.
[ "# Multilingual HellaSwag", "## Dataset Summary\nThis dataset is a machine translated version of the HellaSwag dataset.\n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #language-Icelandic #language-English #language-Norwegian #language-Norwegian Bokmål #license-cc-by-nc-4.0 #region-us \n", "# Multilingual HellaSwag", "## Dataset Summary\nThis dataset is a machine translated version of the HellaSwag dataset.\n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
[ 251, 8, 104 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #language-Icelandic #language-English #language-Norwegian #language-Norwegian Bokmål #license-cc-by-nc-4.0 #region-us \n# Multilingual HellaSwag## Dataset Summary\nThis dataset is a machine translated version of the HellaSwag dataset.\n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
a368e903d97d76aa9b3bf81f3f32344379ab29af
# Dataset of Quetzalcoatl This is the dataset of Quetzalcoatl, containing 154 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 154 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 369 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 434 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 154 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 154 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 154 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 369 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 369 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 307 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 434 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 434 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/quetzalcoatl_kobayashisanchinomaidragon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-12-27T20:56:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-12-27T20:57:07+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Quetzalcoatl ======================= This is the dataset of Quetzalcoatl, containing 154 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
c343b0e0a73391b5c78d87aa409f60e77b1ba0fc
# Multilingual MMLU ## Dataset Summary This dataset is a machine-translated version of the [MMLU dataset](https://huggingface.co/datasets/cais/mmlu). The Icelandic (is) part was translated with [Miðeind](https://mideind.is/english.html)'s Greynir model and Norwegian (nb) was translated with [DeepL](https://deepl.com/). The rest of the languages were translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to [this GitHub repository](https://github.com/nlp-uoregon/mlmm-evaluation).
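The per-language config layout mirrors m_arc, which makes it easy to sweep several translations in one loop (config and split names taken from the metadata below):

```python
from datasets import load_dataset

# Compare test-split sizes across a few of the translated configs.
for lang in ["en", "da", "is", "nb"]:
    mmlu = load_dataset("alexandrainst/m_mmlu", lang, split="test")
    print(lang, len(mmlu))
```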
alexandrainst/m_mmlu
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "size_categories:10K<n<100K", "language:ar", "language:bn", "language:ca", "language:da", "language:de", "language:en", "language:es", "language:eu", "language:fr", "language:gu", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:is", "language:it", "language:kn", "language:ml", "language:mr", "language:nb", "language:no", "language:ne", "language:nl", "language:pt", "language:ro", "language:ru", "language:sk", "language:sr", "language:sv", "language:ta", "language:te", "language:uk", "language:vi", "language:zh", "license:cc-by-nc-4.0", "region:us" ]
2023-12-27T20:56:17+00:00
{"language": ["ar", "bn", "ca", "da", "de", "en", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "is", "it", "kn", "ml", "mr", "nb", "no", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "configs": [{"config_name": "ar", "data_files": [{"split": "train", "path": "data/ar/train.jsonl"}, {"split": "val", "path": "data/ar/val.jsonl"}, {"split": "test", "path": "data/ar/test.jsonl"}]}, {"config_name": "bn", "data_files": [{"split": "train", "path": "data/bn/train.jsonl"}, {"split": "val", "path": "data/bn/val.jsonl"}, {"split": "test", "path": "data/bn/test.jsonl"}]}, {"config_name": "ca", "data_files": [{"split": "train", "path": "data/ca/train.jsonl"}, {"split": "val", "path": "data/ca/val.jsonl"}, {"split": "test", "path": "data/ca/test.jsonl"}]}, {"config_name": "da", "data_files": [{"split": "train", "path": "data/da/train.jsonl"}, {"split": "val", "path": "data/da/val.jsonl"}, {"split": "test", "path": "data/da/test.jsonl"}]}, {"config_name": "de", "data_files": [{"split": "train", "path": "data/de/train.jsonl"}, {"split": "val", "path": "data/de/val.jsonl"}, {"split": "test", "path": "data/de/test.jsonl"}]}, {"config_name": "en", "data_files": [{"split": "train", "path": "data/en/train.jsonl"}, {"split": "val", "path": "data/en/val.jsonl"}, {"split": "test", "path": "data/en/test.jsonl"}]}, {"config_name": "es", "data_files": [{"split": "train", "path": "data/es/train.jsonl"}, {"split": "val", "path": "data/es/val.jsonl"}, {"split": "test", "path": "data/es/test.jsonl"}]}, {"config_name": "eu", "data_files": [{"split": "train", "path": "data/eu/train.jsonl"}, {"split": "val", "path": "data/eu/val.jsonl"}, {"split": "test", "path": "data/eu/test.jsonl"}]}, {"config_name": "fr", "data_files": [{"split": "train", "path": "data/fr/train.jsonl"}, {"split": "val", "path": "data/fr/val.jsonl"}, {"split": "test", "path": "data/fr/test.jsonl"}]}, {"config_name": "gu", "data_files": [{"split": "train", "path": "data/gu/train.jsonl"}, {"split": "val", "path": "data/gu/val.jsonl"}, {"split": "test", "path": "data/gu/test.jsonl"}]}, {"config_name": "hi", "data_files": [{"split": "train", "path": "data/hi/train.jsonl"}, {"split": "val", "path": "data/hi/val.jsonl"}, {"split": "test", "path": "data/hi/test.jsonl"}]}, {"config_name": "hr", "data_files": [{"split": "train", "path": "data/hr/train.jsonl"}, {"split": "val", "path": "data/hr/val.jsonl"}, {"split": "test", "path": "data/hr/test.jsonl"}]}, {"config_name": "hu", "data_files": [{"split": "train", "path": "data/hu/train.jsonl"}, {"split": "val", "path": "data/hu/val.jsonl"}, {"split": "test", "path": "data/hu/test.jsonl"}]}, {"config_name": "hy", "data_files": [{"split": "train", "path": "data/hy/train.jsonl"}, {"split": "val", "path": "data/hy/val.jsonl"}, {"split": "test", "path": "data/hy/test.jsonl"}]}, {"config_name": "id", "data_files": [{"split": "train", "path": "data/id/train.jsonl"}, {"split": "val", "path": "data/id/val.jsonl"}, {"split": "test", "path": "data/id/test.jsonl"}]}, {"config_name": "is", "data_files": [{"split": "train", "path": "data/is/train.jsonl"}, {"split": "val", "path": "data/is/val.jsonl"}, {"split": "test", "path": "data/is/test.jsonl"}]}, {"config_name": "it", "data_files": [{"split": "train", "path": "data/it/train.jsonl"}, {"split": "val", "path": "data/it/val.jsonl"}, {"split": "test", "path": "data/it/test.jsonl"}]}, 
{"config_name": "kn", "data_files": [{"split": "train", "path": "data/kn/train.jsonl"}, {"split": "val", "path": "data/kn/val.jsonl"}, {"split": "test", "path": "data/kn/test.jsonl"}]}, {"config_name": "ml", "data_files": [{"split": "train", "path": "data/ml/train.jsonl"}, {"split": "val", "path": "data/ml/val.jsonl"}, {"split": "test", "path": "data/ml/test.jsonl"}]}, {"config_name": "mr", "data_files": [{"split": "train", "path": "data/mr/train.jsonl"}, {"split": "val", "path": "data/mr/val.jsonl"}, {"split": "test", "path": "data/mr/test.jsonl"}]}, {"config_name": "nb", "data_files": [{"split": "train", "path": "data/nb/train.jsonl"}, {"split": "val", "path": "data/nb/val.jsonl"}, {"split": "test", "path": "data/nb/test.jsonl"}]}, {"config_name": "ne", "data_files": [{"split": "train", "path": "data/ne/train.jsonl"}, {"split": "val", "path": "data/ne/val.jsonl"}, {"split": "test", "path": "data/ne/test.jsonl"}]}, {"config_name": "nl", "data_files": [{"split": "train", "path": "data/nl/train.jsonl"}, {"split": "val", "path": "data/nl/val.jsonl"}, {"split": "test", "path": "data/nl/test.jsonl"}]}, {"config_name": "pt", "data_files": [{"split": "train", "path": "data/pt/train.jsonl"}, {"split": "val", "path": "data/pt/val.jsonl"}, {"split": "test", "path": "data/pt/test.jsonl"}]}, {"config_name": "ro", "data_files": [{"split": "train", "path": "data/ro/train.jsonl"}, {"split": "val", "path": "data/ro/val.jsonl"}, {"split": "test", "path": "data/ro/test.jsonl"}]}, {"config_name": "ru", "data_files": [{"split": "train", "path": "data/ru/train.jsonl"}, {"split": "val", "path": "data/ru/val.jsonl"}, {"split": "test", "path": "data/ru/test.jsonl"}]}, {"config_name": "sk", "data_files": [{"split": "train", "path": "data/sk/train.jsonl"}, {"split": "val", "path": "data/sk/val.jsonl"}, {"split": "test", "path": "data/sk/test.jsonl"}]}, {"config_name": "sr", "data_files": [{"split": "train", "path": "data/sr/train.jsonl"}, {"split": "val", "path": "data/sr/val.jsonl"}, {"split": "test", "path": "data/sr/test.jsonl"}]}, {"config_name": "sv", "data_files": [{"split": "train", "path": "data/sv/train.jsonl"}, {"split": "val", "path": "data/sv/val.jsonl"}, {"split": "test", "path": "data/sv/test.jsonl"}]}, {"config_name": "ta", "data_files": [{"split": "train", "path": "data/ta/train.jsonl"}, {"split": "val", "path": "data/ta/val.jsonl"}, {"split": "test", "path": "data/ta/test.jsonl"}]}, {"config_name": "te", "data_files": [{"split": "train", "path": "data/te/train.jsonl"}, {"split": "val", "path": "data/te/val.jsonl"}, {"split": "test", "path": "data/te/test.jsonl"}]}, {"config_name": "uk", "data_files": [{"split": "train", "path": "data/uk/train.jsonl"}, {"split": "val", "path": "data/uk/val.jsonl"}, {"split": "test", "path": "data/uk/test.jsonl"}]}, {"config_name": "vi", "data_files": [{"split": "train", "path": "data/vi/train.jsonl"}, {"split": "val", "path": "data/vi/val.jsonl"}, {"split": "test", "path": "data/vi/test.jsonl"}]}, {"config_name": "zh", "data_files": [{"split": "train", "path": "data/zh/train.jsonl"}, {"split": "val", "path": "data/zh/val.jsonl"}, {"split": "test", "path": "data/zh/test.jsonl"}]}]}
2024-02-12T16:30:24+00:00
[]
[ "ar", "bn", "ca", "da", "de", "en", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "is", "it", "kn", "ml", "mr", "nb", "no", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-English #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Norwegian Bokmål #language-Norwegian #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us
# Multilingual MMLU

## Dataset Summary
This dataset is a machine translated version of the MMLU dataset. 

The Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages were translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository.
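The per-language configuration layout above (one config per language code, each with `train`/`val`/`test` splits) can be loaded directly with the `datasets` library. A minimal sketch; the repo id `alexandrainst/m_mmlu` is a hypothetical placeholder, since the actual id is not shown in this record:

```python
from datasets import load_dataset

# Hypothetical repo id; substitute the dataset's actual Hub id.
# Every language code from the card ("ar" ... "zh") is its own config,
# each with "train", "val" and "test" splits.
mmlu_da = load_dataset("alexandrainst/m_mmlu", "da")
print(mmlu_da["train"][0])  # one translated multiple-choice question
```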
[ "# Multilingual MMLU", "## Dataset Summary\nThis dataset is a machine translated version of the MMLU dataset. \n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-English #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Norwegian Bokmål #language-Norwegian #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us \n", "# Multilingual MMLU", "## Dataset Summary\nThis dataset is a machine translated version of the MMLU dataset. \n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
[ 251, 6, 102 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-English #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Icelandic #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Norwegian Bokmål #language-Norwegian #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us \n# Multilingual MMLU## Dataset Summary\nThis dataset is a machine translated version of the MMLU dataset. \n\nThe Icelandic (is) part was translated with Miðeind's Greynir model and Norwegian (nb) was translated with DeepL. The rest of the languages was translated using GPT-3.5-turbo by the University of Oregon, and this part of the dataset was originally uploaded to this Github repository." ]
f0445d470f1925882b990f5f247fdcf288972f60
# Multilingual TruthfulQA ## Dataset Summary This dataset is a machine translated version of the [TruthfulQA dataset](https://huggingface.co/datasets/truthful_qa), translated using GPT-3.5-turbo. This dataset was created by the University of Oregon, and was originally uploaded to [this Github repository](https://github.com/nlp-uoregon/mlmm-evaluation). ## Citation If you use this dataset in your work, please cite the following paper: ```bibtex @article{dac2023okapi, title={Okapi: Instruction-tuned Large Language Models in Multiple Languages with Reinforcement Learning from Human Feedback}, author={Dac Lai, Viet and Van Nguyen, Chien and Ngo, Nghia Trung and Nguyen, Thuat and Dernoncourt, Franck and Rossi, Ryan A and Nguyen, Thien Huu}, journal={arXiv e-prints}, pages={arXiv--2307}, year={2023} } ```
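A minimal loading sketch, assuming only what the repo metadata for this record states: each language code is a separate config, and only a `val` split is published:

```python
from datasets import load_dataset

# Each language code ("ar", "bn", ..., "zh") is its own config; the repo
# metadata lists a single "val" split per language.
truthfulqa_da = load_dataset("alexandrainst/m_truthfulqa", "da", split="val")
print(truthfulqa_da[0])  # one translated TruthfulQA question
```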
alexandrainst/m_truthfulqa
[ "task_categories:question-answering", "task_ids:multiple-choice-qa", "size_categories:10K<n<100K", "language:ar", "language:bn", "language:ca", "language:da", "language:de", "language:es", "language:eu", "language:fr", "language:gu", "language:hi", "language:hr", "language:hu", "language:hy", "language:id", "language:it", "language:kn", "language:ml", "language:mr", "language:ne", "language:nl", "language:pt", "language:ro", "language:ru", "language:sk", "language:sr", "language:sv", "language:ta", "language:te", "language:uk", "language:vi", "language:zh", "license:cc-by-nc-4.0", "region:us" ]
2023-12-27T20:56:57+00:00
{"language": ["ar", "bn", "ca", "da", "de", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "it", "kn", "ml", "mr", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "task_ids": ["multiple-choice-qa"], "configs": [{"config_name": "ar", "data_files": [{"split": "val", "path": "data/ar/val.jsonl"}]}, {"config_name": "bn", "data_files": [{"split": "val", "path": "data/bn/val.jsonl"}]}, {"config_name": "ca", "data_files": [{"split": "val", "path": "data/ca/val.jsonl"}]}, {"config_name": "da", "data_files": [{"split": "val", "path": "data/da/val.jsonl"}]}, {"config_name": "de", "data_files": [{"split": "val", "path": "data/de/val.jsonl"}]}, {"config_name": "es", "data_files": [{"split": "val", "path": "data/es/val.jsonl"}]}, {"config_name": "eu", "data_files": [{"split": "val", "path": "data/eu/val.jsonl"}]}, {"config_name": "fr", "data_files": [{"split": "val", "path": "data/fr/val.jsonl"}]}, {"config_name": "gu", "data_files": [{"split": "val", "path": "data/gu/val.jsonl"}]}, {"config_name": "hi", "data_files": [{"split": "val", "path": "data/hi/val.jsonl"}]}, {"config_name": "hr", "data_files": [{"split": "val", "path": "data/hr/val.jsonl"}]}, {"config_name": "hu", "data_files": [{"split": "val", "path": "data/hu/val.jsonl"}]}, {"config_name": "hy", "data_files": [{"split": "val", "path": "data/hy/val.jsonl"}]}, {"config_name": "id", "data_files": [{"split": "val", "path": "data/id/val.jsonl"}]}, {"config_name": "it", "data_files": [{"split": "val", "path": "data/it/val.jsonl"}]}, {"config_name": "kn", "data_files": [{"split": "val", "path": "data/kn/val.jsonl"}]}, {"config_name": "ml", "data_files": [{"split": "val", "path": "data/ml/val.jsonl"}]}, {"config_name": "mr", "data_files": [{"split": "val", "path": "data/mr/val.jsonl"}]}, {"config_name": "ne", "data_files": [{"split": "val", "path": "data/ne/val.jsonl"}]}, {"config_name": "nl", "data_files": [{"split": "val", "path": "data/nl/val.jsonl"}]}, {"config_name": "pt", "data_files": [{"split": "val", "path": "data/pt/val.jsonl"}]}, {"config_name": "ro", "data_files": [{"split": "val", "path": "data/ro/val.jsonl"}]}, {"config_name": "ru", "data_files": [{"split": "val", "path": "data/ru/val.jsonl"}]}, {"config_name": "sk", "data_files": [{"split": "val", "path": "data/sk/val.jsonl"}]}, {"config_name": "sr", "data_files": [{"split": "val", "path": "data/sr/val.jsonl"}]}, {"config_name": "sv", "data_files": [{"split": "val", "path": "data/sv/val.jsonl"}]}, {"config_name": "ta", "data_files": [{"split": "val", "path": "data/ta/val.jsonl"}]}, {"config_name": "te", "data_files": [{"split": "val", "path": "data/te/val.jsonl"}]}, {"config_name": "uk", "data_files": [{"split": "val", "path": "data/uk/val.jsonl"}]}, {"config_name": "vi", "data_files": [{"split": "val", "path": "data/vi/val.jsonl"}]}, {"config_name": "zh", "data_files": [{"split": "val", "path": "data/zh/val.jsonl"}]}]}
2023-12-27T20:56:58+00:00
[]
[ "ar", "bn", "ca", "da", "de", "es", "eu", "fr", "gu", "hi", "hr", "hu", "hy", "id", "it", "kn", "ml", "mr", "ne", "nl", "pt", "ro", "ru", "sk", "sr", "sv", "ta", "te", "uk", "vi", "zh" ]
TAGS #task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us
# Multilingual TruthfulQA ## Dataset Summary This dataset is a machine translated version of the TruthfulQA dataset, translated using GPT-3.5-turbo. This dataset was created by the University of Oregon, and was originally uploaded to this Github repository. If you use this dataset in your work, please cite the following paper:
[ "# Multilingual TruthfulQA", "## Dataset Summary\nThis dataset is a machine translated version of the TruthfulQA dataset, translated using GPT-3.5-turbo. This dataset was created by the University of Oregon, and was originally uploaded to this Github repository.\n\nIf you use this dataset in your work, please cite the following paper:" ]
[ "TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us \n", "# Multilingual TruthfulQA", "## Dataset Summary\nThis dataset is a machine translated version of the TruthfulQA dataset, translated using GPT-3.5-turbo. This dataset was created by the University of Oregon, and was originally uploaded to this Github repository.\n\nIf you use this dataset in your work, please cite the following paper:" ]
[ 226, 7, 77 ]
[ "passage: TAGS\n#task_categories-question-answering #task_ids-multiple-choice-qa #size_categories-10K<n<100K #language-Arabic #language-Bengali #language-Catalan #language-Danish #language-German #language-Spanish #language-Basque #language-French #language-Gujarati #language-Hindi #language-Croatian #language-Hungarian #language-Armenian #language-Indonesian #language-Italian #language-Kannada #language-Malayalam #language-Marathi #language-Nepali (macrolanguage) #language-Dutch #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Serbian #language-Swedish #language-Tamil #language-Telugu #language-Ukrainian #language-Vietnamese #language-Chinese #license-cc-by-nc-4.0 #region-us \n# Multilingual TruthfulQA## Dataset Summary\nThis dataset is a machine translated version of the TruthfulQA dataset, translated using GPT-3.5-turbo. This dataset was created by the University of Oregon, and was originally uploaded to this Github repository.\n\nIf you use this dataset in your work, please cite the following paper:" ]
492b908dcc7ea201b64cb24b2d839db08385fa1c
# Spanish Wiktionary

## Motivation

Multilingual datasets based on Wikimedia Foundation's Wiktionary tend to use its translation system to fetch non-English words, which causes many words and definitions to be discarded.

## Development

In order to solve this, I wrote a [custom parser](https://github.com/elcapo/eswiktionary_parser) that obtains the definitions straight from a dump of the Spanish Wiktionary. Both the parser and the dataset will be developed in tandem.

## Stage

Both the parser and this dataset are in a very early stage of development, but they already provide a list of 873,990 definitions that are easy to read and process for machine learning purposes.

## Contact

Feel free to [contact me](https://github.com/elcapo) if you are interested in contributing to either the parser or the dataset.
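Since the card documents neither the splits nor the column schema, a quick inspection sketch is the safest starting point; everything below assumes only that the repo loads through the `datasets` library:

```python
from datasets import load_dataset

# The card does not document splits or columns, so load everything and inspect.
wiktionary = load_dataset("carloscapote/es.wiktionary.org")
print(wiktionary)                     # shows the available splits
first_split = next(iter(wiktionary))  # name of the first split
print(wiktionary[first_split][0])     # one raw definition record
```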
carloscapote/es.wiktionary.org
[ "size_categories:100K<n<1M", "language:es", "license:cc-by-sa-4.0", "region:us" ]
2023-12-27T21:07:43+00:00
{"language": ["es"], "license": "cc-by-sa-4.0", "size_categories": ["100K<n<1M"], "pretty_name": "Spanish Wiktionary"}
2023-12-28T23:15:19+00:00
[]
[ "es" ]
TAGS #size_categories-100K<n<1M #language-Spanish #license-cc-by-sa-4.0 #region-us
# Spanish Wiktionary

## Motivation

Multilingual datasets based on Wikimedia Foundation's Wiktionary tend to use its translation system to fetch non-English words, which causes many words and definitions to be discarded.

## Development

In order to solve this, I wrote a custom parser that obtains the definitions straight from a dump of the Spanish Wiktionary. Both the parser and the dataset will be developed in tandem.

## Stage

Both the parser and this dataset are in a very early stage of development, but they already provide a list of 873,990 definitions that are easy to read and process for machine learning purposes.

## Contact

Feel free to contact me if you are interested in contributing to either the parser or the dataset.
[ "# Spanish Wiktionary", "## Motivation\n\nMultilingual datasets based in Wikimedia Foundation's Wiktionary tend to use its translation system to fetch non English words, what causes a lot of words and definitions being discarded.", "## Development\n\nIn order to solve this, I wrote a custom parser that obtains the definitions straight from a dump of the Spanish Wiktionary. Both the parser and the dataset will be developed in harmony.", "## Stage\n\nBoth the parser and this dataset are in a very early stage of development but they already provide a list of 873.990 definitions that are easy to read and process for machine learning purposes.", "## Contact\n\nFeel free to contact me if you are interested in contributing with either the parser, or the dataset." ]
[ "TAGS\n#size_categories-100K<n<1M #language-Spanish #license-cc-by-sa-4.0 #region-us \n", "# Spanish Wiktionary", "## Motivation\n\nMultilingual datasets based in Wikimedia Foundation's Wiktionary tend to use its translation system to fetch non English words, what causes a lot of words and definitions being discarded.", "## Development\n\nIn order to solve this, I wrote a custom parser that obtains the definitions straight from a dump of the Spanish Wiktionary. Both the parser and the dataset will be developed in harmony.", "## Stage\n\nBoth the parser and this dataset are in a very early stage of development but they already provide a list of 873.990 definitions that are easy to read and process for machine learning purposes.", "## Contact\n\nFeel free to contact me if you are interested in contributing with either the parser, or the dataset." ]
[ 34, 5, 44, 48, 43, 25 ]
[ "passage: TAGS\n#size_categories-100K<n<1M #language-Spanish #license-cc-by-sa-4.0 #region-us \n# Spanish Wiktionary## Motivation\n\nMultilingual datasets based in Wikimedia Foundation's Wiktionary tend to use its translation system to fetch non English words, what causes a lot of words and definitions being discarded.## Development\n\nIn order to solve this, I wrote a custom parser that obtains the definitions straight from a dump of the Spanish Wiktionary. Both the parser and the dataset will be developed in harmony.## Stage\n\nBoth the parser and this dataset are in a very early stage of development but they already provide a list of 873.990 definitions that are easy to read and process for machine learning purposes.## Contact\n\nFeel free to contact me if you are interested in contributing with either the parser, or the dataset." ]
89b8888a6a7f412c442d2983c56690a8aa6d1b6b
# Dataset of Elma This is the dataset of Elma, containing 233 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 233 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 531 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 615 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 233 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 233 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 233 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 531 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 531 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 431 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 615 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 615 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/elma_kobayashisanchinomaidragon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-12-27T21:19:09+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-12-27T21:21:17+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Elma =============== This is the dataset of Elma, containing 233 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
bc7cb1d14dd3b877c6f98649208673e23fb02c68
# Dataset of Riko Saikawa This is the dataset of Riko Saikawa, containing 169 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 169 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 400 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 478 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 169 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 169 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 169 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 400 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 400 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 321 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 478 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 478 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/riko_saikawa_kobayashisanchinomaidragon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-12-27T21:35:29+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-12-27T21:36:47+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Riko Saikawa ======================= This is the dataset of Riko Saikawa, containing 169 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
88db6730d3b2fca7e8ccbf4ffc1f89c2dd277518
# Dataset of Shouta Magatsuchi This is the dataset of Shouta Magatsuchi, containing 132 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by [DeepGHS Team](https://github.com/deepghs)([huggingface organization](https://huggingface.co/deepghs)). | Name | Images | Download | Description | |:----------------|---------:|:----------------------------------------|:-----------------------------------------------------------------------------------------| | raw | 132 | [Download](dataset-raw.zip) | Raw data with meta information. | | raw-stage3 | 311 | [Download](dataset-raw-stage3.zip) | 3-stage cropped raw data with meta information. | | raw-stage3-eyes | 361 | [Download](dataset-raw-stage3-eyes.zip) | 3-stage cropped (with eye-focus) raw data with meta information. | | 384x512 | 132 | [Download](dataset-384x512.zip) | 384x512 aligned dataset. | | 512x704 | 132 | [Download](dataset-512x704.zip) | 512x704 aligned dataset. | | 640x880 | 132 | [Download](dataset-640x880.zip) | 640x880 aligned dataset. | | stage3-640 | 311 | [Download](dataset-stage3-640.zip) | 3-stage cropped dataset with the shorter side not exceeding 640 pixels. | | stage3-800 | 311 | [Download](dataset-stage3-800.zip) | 3-stage cropped dataset with the shorter side not exceeding 800 pixels. | | stage3-p512-640 | 245 | [Download](dataset-stage3-p512-640.zip) | 3-stage cropped dataset with the area not less than 512x512 pixels. | | stage3-eyes-640 | 361 | [Download](dataset-stage3-eyes-640.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 640 pixels. | | stage3-eyes-800 | 361 | [Download](dataset-stage3-eyes-800.zip) | 3-stage cropped (with eye-focus) dataset with the shorter side not exceeding 800 pixels. |
CyberHarem/shouta_magatsuchi_kobayashisanchinomaidragon
[ "task_categories:text-to-image", "size_categories:n<1K", "license:mit", "art", "not-for-all-audiences", "region:us" ]
2023-12-27T21:46:47+00:00
{"license": "mit", "size_categories": ["n<1K"], "task_categories": ["text-to-image"], "tags": ["art", "not-for-all-audiences"]}
2023-12-27T21:47:46+00:00
[]
[]
TAGS #task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us
Dataset of Shouta Magatsuchi ============================ This is the dataset of Shouta Magatsuchi, containing 132 images and their tags. Images are crawled from many sites (e.g. danbooru, pixiv, zerochan ...), the auto-crawling system is powered by DeepGHS Team(huggingface organization).
[]
[ "TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
[ 44 ]
[ "passage: TAGS\n#task_categories-text-to-image #size_categories-n<1K #license-mit #art #not-for-all-audiences #region-us \n" ]
cba4983c693c0e28c0db4d37b18355ea857e10a4
This dataset was used in the paper https://arxiv.org/abs/2311.11331
paulofinardi/FAQ_BACEN
[ "task_categories:text-classification", "task_categories:question-answering", "size_categories:1K<n<10K", "language:pt", "license:apache-2.0", "finance", "arxiv:2311.11331", "region:us" ]
2023-12-27T22:45:58+00:00
{"language": ["pt"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification", "question-answering"], "tags": ["finance"]}
2023-12-27T22:49:35+00:00
[ "2311.11331" ]
[ "pt" ]
TAGS #task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-apache-2.0 #finance #arxiv-2311.11331 #region-us
This dataset was used in the paper URL
[]
[ "TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-apache-2.0 #finance #arxiv-2311.11331 #region-us \n" ]
[ 67 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-apache-2.0 #finance #arxiv-2311.11331 #region-us \n" ]
df020b19cc21e27bf2e0bca1544c34f4b1be9792
# Albania-Parliament-Transcriptions

[![Open In Colab](https://colab.research.google.com/assets/colab-badge.svg)](https://colab.research.google.com/github/KushtrimVisoka/Albania-Parliament-Transcriptions/blob/main/Albania_Parliament_Transcriptions.ipynb)

The dataset comprises transcripts of speeches delivered by members of the Albanian Assembly during parliamentary sessions from 2013 onward. The goal of this repository is to provide a valuable resource for researchers and professionals interested in natural language processing or political discourse analysis.

# Data source

The dataset was compiled from publicly available transcripts published on the current and former official websites of the Albanian Assembly (https://parlament.al/).

# Data Preparation

The dataset was compiled by downloading PDF files and converting them to a text format using OCR. The resulting text was then cleaned to fix punctuation and spelling errors. It's important to note that due to the complexity of the PDF-to-text conversion process, the dataset may still contain typos and other errors. As a result, the dataset is provided "as is".

# To do

- [ ] Conduct additional quality assurance checks to identify and correct any remaining errors in the dataset.
- [ ] Add a column for the party of the speaker.

# Dataset structure

The dataset contains the following fields: text, speaker, date, id, num_tokens.

# Usage

```python
from datasets import load_dataset

dataset = load_dataset('Kushtrim/Albania-Parliament-Transcriptions')
```

# License

The dataset is licensed under the Creative Commons Attribution 4.0 International License (https://creativecommons.org/licenses/by/4.0/).

# Citation

If you use this dataset in your research, please consider citing this repository.
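Extending the usage snippet above, here is a small sketch that relies only on the documented fields (`text`, `speaker`, `date`, `id`, `num_tokens`); the speaker name is a placeholder to substitute with any value present in the data:

```python
from datasets import load_dataset

dataset = load_dataset("Kushtrim/Albania-Parliament-Transcriptions", split="train")

# Keep only the speeches of one member; "SOME_SPEAKER" is a placeholder.
speeches = dataset.filter(lambda row: row["speaker"] == "SOME_SPEAKER")
total_tokens = sum(row["num_tokens"] for row in speeches)
print(f"{len(speeches)} speeches, {total_tokens} tokens")
```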
Kushtrim/Albania-Parliament-Transcriptions
[ "size_categories:10K<n<100K", "source_datasets:Kuvendi i Shqipërisë", "language:sq", "license:cc-by-4.0", "region:us" ]
2023-12-27T22:48:47+00:00
{"language": "sq", "license": "cc-by-4.0", "size_categories": ["10K<n<100K"], "source_datasets": "Kuvendi i Shqip\u00ebris\u00eb", "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "speaker", "dtype": "string"}, {"name": "date", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "num_tokens", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 114268574, "num_examples": 44807}], "download_size": 61183209, "dataset_size": 114268574}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2024-02-09T10:06:01+00:00
[]
[ "sq" ]
TAGS #size_categories-10K<n<100K #source_datasets-Kuvendi i Shqipërisë #language-Albanian #license-cc-by-4.0 #region-us
# Albania-Parliament-Transcriptions

![Open In Colab](URL

The dataset comprises transcripts of speeches delivered by members of the Albanian Assembly during parliamentary sessions from 2013 onward. The goal of this repository is to provide a valuable resource for researchers and professionals interested in natural language processing or political discourse analysis.

# Data source

The dataset was compiled from publicly available transcripts published on the current and former official websites of the Albanian Assembly (URL

# Data Preparation

The dataset was compiled by downloading PDF files and converting them to a text format using OCR. The resulting text was then cleaned to fix punctuation and spelling errors. It's important to note that due to the complexity of the PDF-to-text conversion process, the dataset may still contain typos and other errors. As a result, the dataset is provided "as is".

# To do

- [ ] Conduct additional quality assurance checks to identify and correct any remaining errors in the dataset.
- [ ] Add a column for the party of the speaker.

# Dataset structure

The dataset contains the following fields: text, speaker, date, id, num_tokens.

# Usage

# License

The dataset is licensed under the Creative Commons Attribution 4.0 International License (URL

If you use this dataset in your research, please consider citing this repository.
[ "# Albania-Parliament-Transcriptions\n\n![Open In Colab](URL\n\nThe dataset comprises transcripts of speeches delivered by members of the Albanian Assembly during parliamentary sessions spanning from 2013. The goal of this repository is to provide a valuable resource for researchers and professionals interested in natural language processing, or political discourse analysis.", "# Data source\n\nThe dataset was compiled from publicly available transcripts published on the current and old official website of the Albanian Assembly (URL", "# Data Preperation\n\nThe dataset was compiled by downloading PDF files and converting them to a text format using OCR. The resulting text was then cleaned to fix punctuation and spelling errors. It's important to note that due to the complexity of the PDF-to-text conversion process, the dataset may still contain typos and other errors. As a result, the dataset is provided \"as is\".", "# To do\n\n- [ ] Conduct additional quality assurance checks to identify and correct any remaining errors in the dataset.\n- [ ] Add a column for the party of the speaker.", "# Dataset structure\n\nThe dataset contains the following fields: text, speaker, date, id, num_tokens.", "# Usage", "# License\n\nThe dataset is licensed under the Creative Commons Attribution 4.0 International License (URL\n\nIf you use this dataset in your research, please consider citing this repository." ]
[ "TAGS\n#size_categories-10K<n<100K #source_datasets-Kuvendi i Shqipërisë #language-Albanian #license-cc-by-4.0 #region-us \n", "# Albania-Parliament-Transcriptions\n\n![Open In Colab](URL\n\nThe dataset comprises transcripts of speeches delivered by members of the Albanian Assembly during parliamentary sessions spanning from 2013. The goal of this repository is to provide a valuable resource for researchers and professionals interested in natural language processing, or political discourse analysis.", "# Data source\n\nThe dataset was compiled from publicly available transcripts published on the current and old official website of the Albanian Assembly (URL", "# Data Preperation\n\nThe dataset was compiled by downloading PDF files and converting them to a text format using OCR. The resulting text was then cleaned to fix punctuation and spelling errors. It's important to note that due to the complexity of the PDF-to-text conversion process, the dataset may still contain typos and other errors. As a result, the dataset is provided \"as is\".", "# To do\n\n- [ ] Conduct additional quality assurance checks to identify and correct any remaining errors in the dataset.\n- [ ] Add a column for the party of the speaker.", "# Dataset structure\n\nThe dataset contains the following fields: text, speaker, date, id, num_tokens.", "# Usage", "# License\n\nThe dataset is licensed under the Creative Commons Attribution 4.0 International License (URL\n\nIf you use this dataset in your research, please consider citing this repository." ]
[ 44, 81, 33, 96, 43, 27, 3, 37 ]
[ "passage: TAGS\n#size_categories-10K<n<100K #source_datasets-Kuvendi i Shqipërisë #language-Albanian #license-cc-by-4.0 #region-us \n# Albania-Parliament-Transcriptions\n\n![Open In Colab](URL\n\nThe dataset comprises transcripts of speeches delivered by members of the Albanian Assembly during parliamentary sessions spanning from 2013. The goal of this repository is to provide a valuable resource for researchers and professionals interested in natural language processing, or political discourse analysis.# Data source\n\nThe dataset was compiled from publicly available transcripts published on the current and old official website of the Albanian Assembly (URL# Data Preperation\n\nThe dataset was compiled by downloading PDF files and converting them to a text format using OCR. The resulting text was then cleaned to fix punctuation and spelling errors. It's important to note that due to the complexity of the PDF-to-text conversion process, the dataset may still contain typos and other errors. As a result, the dataset is provided \"as is\".# To do\n\n- [ ] Conduct additional quality assurance checks to identify and correct any remaining errors in the dataset.\n- [ ] Add a column for the party of the speaker.# Dataset structure\n\nThe dataset contains the following fields: text, speaker, date, id, num_tokens.# Usage# License\n\nThe dataset is licensed under the Creative Commons Attribution 4.0 International License (URL\n\nIf you use this dataset in your research, please consider citing this repository." ]
b7671d86b4819ab52f89fec3971f646092ee9250
This dataset was used in the article: https://arxiv.org/abs/2311.11331
Itau-Unibanco/FAQ_BACEN
[ "task_categories:text-classification", "task_categories:question-answering", "size_categories:1K<n<10K", "language:pt", "license:apache-2.0", "finance", "arxiv:2311.11331", "region:us" ]
2023-12-27T22:51:20+00:00
{"language": ["pt"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification", "question-answering"], "tags": ["finance"]}
2023-12-27T22:53:37+00:00
[ "2311.11331" ]
[ "pt" ]
TAGS #task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-apache-2.0 #finance #arxiv-2311.11331 #region-us
This dataset was used in the article: URL
[]
[ "TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-apache-2.0 #finance #arxiv-2311.11331 #region-us \n" ]
[ 67 ]
[ "passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-1K<n<10K #language-Portuguese #license-apache-2.0 #finance #arxiv-2311.11331 #region-us \n" ]
3577a939eb90c063c3f1aa0efa3c41737a99b07d
# 2WikiMultihopQA Dataset with GPT-3.5 Generated Questions ## Overview This repository hosts an enhanced version of the 2WikiMultihopQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding. ## Dataset Format Each entry in the dataset is formatted as follows: ```json { "_id": "example_id", "type": "sample_type", "question": "Sample question text?", "context": { "title": ["Title 1", "Title 2"], "content": [ [["Content 1 for Title 1","Content 2 for Title 1"]], [["Content 1 for Title 2"]] ], "questions": [ // newly added [["Question 1 for Title 1"],["Question 2 for Title 1"]], [["Question 1 for Title 2"]] ], "paraphrased_questions": [ // newly added [["Paraphrased Question 1 for Title 1"],["Paraphrased Question 2 for Title 1"]], [["Paraphrased Question 1 for Title 2"]] ] }, "supporting_facts": { "title": ["Title 1", "Title 2"], "sent_id": [0, 0] }, "evidences": { "fact": ["Fact 1", "Fact 2"], "relation": ["relation_1", "relation_2"], "entity": ["Entity 1", "Entity 2"] }, "answer": "sample_answer" } ``` ## Important Notices ### 1. Training Split Unavailability As of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready. ### 2. Commercial Usage Caution Users of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment. ### 3. Citation for Original Dataset This enhanced dataset is based on the 2wikimultihop dataset. Users of this enhanced dataset should also cite the original 2wikimultihop dataset. For more information about the original dataset, please visit [2wikimultihop Dataset on Github](https://github.com/Alab-NII/2wikimultihop). ## Acknowledgements This dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of 2wikimultihop. We thank both parties for their contributions to the field of natural language processing and machine learning.
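To make the nesting above concrete, here is a best-effort sketch that pairs each supporting fact with the generated questions for its sentence. The indexing into `questions` is inferred from the sample entry and may need adjusting; the file name is a hypothetical local copy of one split:

```python
import json

def supporting_questions(entry: dict) -> list:
    """For each supporting fact (title, sentence index), return the
    generated questions attached to that sentence, reading the sample
    format above as questions[title_idx][sent_idx] -> list of questions."""
    titles = entry["context"]["title"]
    questions = entry["context"]["questions"]
    pairs = []
    for title, sent_id in zip(entry["supporting_facts"]["title"],
                              entry["supporting_facts"]["sent_id"]):
        pairs.append((title, sent_id, questions[titles.index(title)][sent_id]))
    return pairs

with open("dev.json") as f:  # hypothetical local copy of a split
    data = json.load(f)
print(supporting_questions(data[0]))
```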
scholarly-shadows-syndicate/2wikimultihopqa_with_q_gpt35
[ "license:apache-2.0", "region:us" ]
2023-12-27T23:12:32+00:00
{"license": "apache-2.0"}
2024-01-14T23:42:54+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
# 2WikiMultihopQA Dataset with GPT-3.5 Generated Questions ## Overview This repository hosts an enhanced version of the 2WikiMultihopQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding. ## Dataset Format Each entry in the dataset is formatted as follows: ## Important Notices ### 1. Training Split Unavailability As of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready. ### 2. Commercial Usage Caution Users of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment. ### 3. Citation for Original Dataset This enhanced dataset is based on the 2wikimultihop dataset. Users of this enhanced dataset should also cite the original 2wikimultihop dataset. For more information about the original dataset, please visit 2wikimultihop Dataset on Github. ## Acknowledgements This dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of 2wikimultihop. We thank both parties for their contributions to the field of natural language processing and machine learning.
[ "# 2WikiMultihopQA Dataset with GPT-3.5 Generated Questions", "## Overview\n\nThis repository hosts an enhanced version of the 2WikiMultihopQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.", "## Dataset Format\n\nEach entry in the dataset is formatted as follows:", "## Important Notices", "### 1. Training Split Unavailability\n\nAs of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.", "### 2. Commercial Usage Caution\n\nUsers of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.", "### 3. Citation for Original Dataset\n\nThis enhanced dataset is based on the 2wikimultihop dataset. Users of this enhanced dataset should also cite the original 2wikimultihop dataset. For more information about the original dataset, please visit 2wikimultihop Dataset on Github.", "## Acknowledgements\n\nThis dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of 2wikimultihop. We thank both parties for their contributions to the field of natural language processing and machine learning." ]
[ "TAGS\n#license-apache-2.0 #region-us \n", "# 2WikiMultihopQA Dataset with GPT-3.5 Generated Questions", "## Overview\n\nThis repository hosts an enhanced version of the 2WikiMultihopQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.", "## Dataset Format\n\nEach entry in the dataset is formatted as follows:", "## Important Notices", "### 1. Training Split Unavailability\n\nAs of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.", "### 2. Commercial Usage Caution\n\nUsers of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.", "### 3. Citation for Original Dataset\n\nThis enhanced dataset is based on the 2wikimultihop dataset. Users of this enhanced dataset should also cite the original 2wikimultihop dataset. For more information about the original dataset, please visit 2wikimultihop Dataset on Github.", "## Acknowledgements\n\nThis dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of 2wikimultihop. We thank both parties for their contributions to the field of natural language processing and machine learning." ]
[ 14, 18, 86, 17, 5, 52, 73, 67, 62 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n# 2WikiMultihopQA Dataset with GPT-3.5 Generated Questions## Overview\n\nThis repository hosts an enhanced version of the 2WikiMultihopQA dataset, where each supporting sentence in the dataset has been supplemented with questions generated using OpenAI's GPT-3.5 turbo API. The aim is to provide a richer context for each entry, potentially benefiting various NLP tasks, such as question answering and context understanding.## Dataset Format\n\nEach entry in the dataset is formatted as follows:## Important Notices### 1. Training Split Unavailability\n\nAs of now, the training split of this enhanced dataset is still under computation and is not available. We are actively working on this and will update the repository once it's ready.### 2. Commercial Usage Caution\n\nUsers of this dataset should be aware that the questions generated by OpenAI's GPT-3.5 turbo API may not be suitable for commercial use, as per the OpenAI terms of service. We advise caution and suggest reviewing OpenAI's policies before any commercial deployment.### 3. Citation for Original Dataset\n\nThis enhanced dataset is based on the 2wikimultihop dataset. Users of this enhanced dataset should also cite the original 2wikimultihop dataset. For more information about the original dataset, please visit 2wikimultihop Dataset on Github.## Acknowledgements\n\nThis dataset enhancement was made possible by OpenAI's GPT-3.5 turbo API, and the original dataset was provided by the creators of 2wikimultihop. We thank both parties for their contributions to the field of natural language processing and machine learning." ]
0f78e815bf4ba9ca0addf3484ee195d526c506a4
# Massive Korean synthetic dataset

This dataset is a large-scale synthetic Korean dataset created using Gemini Pro. It follows the methodology described in *Creation of synthetic textbook-quality datasets* in [Textbooks Are All You Need](https://arxiv.org/abs/2306.11644).

## Data overview

**The name of a subset does not necessarily indicate the contents of that subset.**

**Further modification is required before using this dataset for training.**

**Rather than using this dataset as-is, we recommend processing it to fit your intended task before use, e.g., converting it into a QA set with a local model.**

| subset | row count | link | notes |
|---|---|---|---|
| tiny-textbooks | 395,985 | [nampdn-ai/tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks) | |
| ko_wikidata | 127,614 | [maywell/ko_wikidata_QA](https://huggingface.co/datasets/maywell/ko_wikidata_QA) | |
| normal_instructions | 240,523 | [KonstantyM/science_qa](https://huggingface.co/datasets/KonstantyM/science_qa) | with science texts |
| claude_evol | 239,102 | [Norquinal/claude_evol_instruct_210k](https://huggingface.co/datasets/Norquinal/claude_evol_instruct_210k) | used 250k files from that repo |
| code-alpaca | 64,112 | [theblackcat102/evol-codealpaca-v1](https://huggingface.co/datasets/theblackcat102/evol-codealpaca-v1) | original is a coding dataset, but generated data is not mainly a coding dataset |
| helpsteer | 25,253 | [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) | |
| mmlu_abstract_algebra | 88,848 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_all | 97,765 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_anatomy | 97,463 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_astronomy | 97,347 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_business_ethics | 97,327 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_clinical_knowledge | 97,226 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_biology | 97,285 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_chemistry | 97,435 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_computer_science | 92,606 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_mathematics | 94,070 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_medicine | 95,156 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_college_physics | 97,452 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_computer_security | 97,212 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_conceptual_physics | 88,216 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_econometrics | 91,854 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_electrical_engineering | 87,826 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_elementary_mathematics | 89,307 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_formal_logic | 95,483 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_global_facts | 94,984 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_biology | 97,117 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_chemistry | 96,907 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_computer_science | 97,351 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_european_history | 97,222 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_geography | 97,261 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_government_and_politics | 97,311 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_macroeconomics | 97,400 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_mathematics | 97,396 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_microeconomics | 97,435 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_physics | 95,740 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_psychology | 80,626 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_statistics | 76,033 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_us_history | 79,322 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_high_school_world_history | 85,990 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_human_aging | 78,341 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_human_sexuality | 79,327 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |
| mmlu_international_law | 78,989 | [cais/mmlu](https://huggingface.co/datasets/cais/mmlu) | |

## When you find a problem

If you find any issues with the dataset, please let us know in the discussion or send us a pull request.
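Each subset in the table is published as its own config. A minimal loading sketch; per the repo metadata, the single column is named `text` for some subsets and `0` for others, so the snippet checks which one is present:

```python
from datasets import load_dataset

# Every subset in the table above is a separate config with a "train" split.
ds = load_dataset("maywell/korean_textbooks", "tiny-textbooks", split="train")

# The repo metadata names the single column "text" in some subsets and "0"
# in others, so resolve it at runtime.
column = "text" if "text" in ds.column_names else "0"
print(ds[0][column][:200])
```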
maywell/korean_textbooks
[ "size_categories:1M<n<10M", "language:ko", "license:apache-2.0", "arxiv:2306.11644", "region:us" ]
2023-12-27T23:13:45+00:00
{"language": ["ko"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "pretty_name": "\ub300\uaddc\ubaa8 \ud55c\uad6d\uc5b4 Synthetic \ub370\uc774\ud130", "dataset_info": [{"config_name": "claude_evol", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 992896186, "num_examples": 239102}], "download_size": 380188122, "dataset_size": 992896186}, {"config_name": "code-alpaca", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 273836723, "num_examples": 64112}], "download_size": 100817441, "dataset_size": 273836723}, {"config_name": "helpsteer", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 101753037, "num_examples": 25253}], "download_size": 38660919, "dataset_size": 101753037}, {"config_name": "ko_wikidata", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 527306289, "num_examples": 127614}], "download_size": 197029339, "dataset_size": 527306289}, {"config_name": "mmlu_abstract_algebra", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 369008992, "num_examples": 88848}], "download_size": 135822870, "dataset_size": 369008992}, {"config_name": "mmlu_all", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 406126621, "num_examples": 97765}], "download_size": 149486712, "dataset_size": 406126621}, {"config_name": "mmlu_anatomy", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404317465, "num_examples": 97463}], "download_size": 148806011, "dataset_size": 404317465}, {"config_name": "mmlu_astronomy", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404137638, "num_examples": 97347}], "download_size": 148705490, "dataset_size": 404137638}, {"config_name": "mmlu_business_ethics", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404250245, "num_examples": 97327}], "download_size": 148763276, "dataset_size": 404250245}, {"config_name": "mmlu_clinical_knowledge", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 403659005, "num_examples": 97226}], "download_size": 148688069, "dataset_size": 403659005}, {"config_name": "mmlu_college_biology", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404028634, "num_examples": 97285}], "download_size": 148722802, "dataset_size": 404028634}, {"config_name": "mmlu_college_chemistry", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404667385, "num_examples": 97435}], "download_size": 148855223, "dataset_size": 404667385}, {"config_name": "mmlu_college_computer_science", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 385176880, "num_examples": 92606}], "download_size": 141868873, "dataset_size": 385176880}, {"config_name": "mmlu_college_mathematics", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 390603751, "num_examples": 94070}], "download_size": 143833823, "dataset_size": 390603751}, {"config_name": "mmlu_college_medicine", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 395144479, "num_examples": 95156}], "download_size": 145271248, "dataset_size": 395144479}, {"config_name": "mmlu_college_physics", 
"features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404906114, "num_examples": 97452}], "download_size": 148870088, "dataset_size": 404906114}, {"config_name": "mmlu_computer_security", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 403699674, "num_examples": 97212}], "download_size": 148755211, "dataset_size": 403699674}, {"config_name": "mmlu_conceptual_physics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 366231421, "num_examples": 88216}], "download_size": 134989933, "dataset_size": 366231421}, {"config_name": "mmlu_econometrics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 380851762, "num_examples": 91854}], "download_size": 140295665, "dataset_size": 380851762}, {"config_name": "mmlu_electrical_engineering", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 364564129, "num_examples": 87826}], "download_size": 134376902, "dataset_size": 364564129}, {"config_name": "mmlu_elementary_mathematics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 371101672, "num_examples": 89307}], "download_size": 136622044, "dataset_size": 371101672}, {"config_name": "mmlu_formal_logic", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 395937096, "num_examples": 95483}], "download_size": 145736493, "dataset_size": 395937096}, {"config_name": "mmlu_global_facts", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 394596084, "num_examples": 94984}], "download_size": 145284966, "dataset_size": 394596084}, {"config_name": "mmlu_high_school_biology", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 402382699, "num_examples": 97117}], "download_size": 148038235, "dataset_size": 402382699}, {"config_name": "mmlu_high_school_chemistry", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 402886667, "num_examples": 96907}], "download_size": 148323317, "dataset_size": 402886667}, {"config_name": "mmlu_high_school_computer_science", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 403966380, "num_examples": 97351}], "download_size": 148666121, "dataset_size": 403966380}, {"config_name": "mmlu_high_school_european_history", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 403671884, "num_examples": 97222}], "download_size": 148454177, "dataset_size": 403671884}, {"config_name": "mmlu_high_school_geography", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404040602, "num_examples": 97261}], "download_size": 148657890, "dataset_size": 404040602}, {"config_name": "mmlu_high_school_government_and_politics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 403990139, "num_examples": 97311}], "download_size": 148568388, "dataset_size": 403990139}, {"config_name": "mmlu_high_school_macroeconomics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404170166, "num_examples": 97400}], "download_size": 148591243, "dataset_size": 404170166}, {"config_name": "mmlu_high_school_mathematics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404846407, "num_examples": 97396}], 
"download_size": 149076619, "dataset_size": 404846407}, {"config_name": "mmlu_high_school_microeconomics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 404613760, "num_examples": 97435}], "download_size": 148970422, "dataset_size": 404613760}, {"config_name": "mmlu_high_school_physics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 397678253, "num_examples": 95740}], "download_size": 146340167, "dataset_size": 397678253}, {"config_name": "mmlu_high_school_psychology", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 334767526, "num_examples": 80626}], "download_size": 123054403, "dataset_size": 334767526}, {"config_name": "mmlu_high_school_statistics", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 315209112, "num_examples": 76033}], "download_size": 115876698, "dataset_size": 315209112}, {"config_name": "mmlu_high_school_us_history", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 329179309, "num_examples": 79322}], "download_size": 120972668, "dataset_size": 329179309}, {"config_name": "mmlu_high_school_world_history", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 357910528, "num_examples": 85990}], "download_size": 131809165, "dataset_size": 357910528}, {"config_name": "mmlu_human_aging", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 325427761, "num_examples": 78341}], "download_size": 119430234, "dataset_size": 325427761}, {"config_name": "mmlu_human_sexuality", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 328912659, "num_examples": 79327}], "download_size": 121032722, "dataset_size": 328912659}, {"config_name": "mmlu_international_law", "features": [{"name": "0", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 327874597, "num_examples": 78989}], "download_size": 120785769, "dataset_size": 327874597}, {"config_name": "normal_instructions", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 956305865, "num_examples": 240523}], "download_size": 362796244, "dataset_size": 956305865}, {"config_name": "tiny-textbooks", "features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1722063576, "num_examples": 395985}], "download_size": 635724860, "dataset_size": 1722063576}], "configs": [{"config_name": "claude_evol", "data_files": [{"split": "train", "path": "claude_evol/train-*"}]}, {"config_name": "code-alpaca", "data_files": [{"split": "train", "path": "code-alpaca/train-*"}]}, {"config_name": "helpsteer", "data_files": [{"split": "train", "path": "helpsteer/train-*"}]}, {"config_name": "ko_wikidata", "data_files": [{"split": "train", "path": "ko_wikidata/train-*"}]}, {"config_name": "mmlu_abstract_algebra", "data_files": [{"split": "train", "path": "mmlu_abstract_algebra/train-*"}]}, {"config_name": "mmlu_all", "data_files": [{"split": "train", "path": "mmlu_all/train-*"}]}, {"config_name": "mmlu_anatomy", "data_files": [{"split": "train", "path": "mmlu_anatomy/train-*"}]}, {"config_name": "mmlu_astronomy", "data_files": [{"split": "train", "path": "mmlu_astronomy/train-*"}]}, {"config_name": "mmlu_business_ethics", "data_files": [{"split": "train", "path": "mmlu_business_ethics/train-*"}]}, {"config_name": "mmlu_clinical_knowledge", "data_files": [{"split": 
"train", "path": "mmlu_clinical_knowledge/train-*"}]}, {"config_name": "mmlu_college_biology", "data_files": [{"split": "train", "path": "mmlu_college_biology/train-*"}]}, {"config_name": "mmlu_college_chemistry", "data_files": [{"split": "train", "path": "mmlu_college_chemistry/train-*"}]}, {"config_name": "mmlu_college_computer_science", "data_files": [{"split": "train", "path": "mmlu_college_computer_science/train-*"}]}, {"config_name": "mmlu_college_mathematics", "data_files": [{"split": "train", "path": "mmlu_college_mathematics/train-*"}]}, {"config_name": "mmlu_college_medicine", "data_files": [{"split": "train", "path": "mmlu_college_medicine/train-*"}]}, {"config_name": "mmlu_college_physics", "data_files": [{"split": "train", "path": "mmlu_college_physics/train-*"}]}, {"config_name": "mmlu_computer_security", "data_files": [{"split": "train", "path": "mmlu_computer_security/train-*"}]}, {"config_name": "mmlu_conceptual_physics", "data_files": [{"split": "train", "path": "mmlu_conceptual_physics/train-*"}]}, {"config_name": "mmlu_econometrics", "data_files": [{"split": "train", "path": "mmlu_econometrics/train-*"}]}, {"config_name": "mmlu_electrical_engineering", "data_files": [{"split": "train", "path": "mmlu_electrical_engineering/train-*"}]}, {"config_name": "mmlu_elementary_mathematics", "data_files": [{"split": "train", "path": "mmlu_elementary_mathematics/train-*"}]}, {"config_name": "mmlu_formal_logic", "data_files": [{"split": "train", "path": "mmlu_formal_logic/train-*"}]}, {"config_name": "mmlu_global_facts", "data_files": [{"split": "train", "path": "mmlu_global_facts/train-*"}]}, {"config_name": "mmlu_high_school_biology", "data_files": [{"split": "train", "path": "mmlu_high_school_biology/train-*"}]}, {"config_name": "mmlu_high_school_chemistry", "data_files": [{"split": "train", "path": "mmlu_high_school_chemistry/train-*"}]}, {"config_name": "mmlu_high_school_computer_science", "data_files": [{"split": "train", "path": "mmlu_high_school_computer_science/train-*"}]}, {"config_name": "mmlu_high_school_european_history", "data_files": [{"split": "train", "path": "mmlu_high_school_european_history/train-*"}]}, {"config_name": "mmlu_high_school_geography", "data_files": [{"split": "train", "path": "mmlu_high_school_geography/train-*"}]}, {"config_name": "mmlu_high_school_government_and_politics", "data_files": [{"split": "train", "path": "mmlu_high_school_government_and_politics/train-*"}]}, {"config_name": "mmlu_high_school_macroeconomics", "data_files": [{"split": "train", "path": "mmlu_high_school_macroeconomics/train-*"}]}, {"config_name": "mmlu_high_school_mathematics", "data_files": [{"split": "train", "path": "mmlu_high_school_mathematics/train-*"}]}, {"config_name": "mmlu_high_school_microeconomics", "data_files": [{"split": "train", "path": "mmlu_high_school_microeconomics/train-*"}]}, {"config_name": "mmlu_high_school_physics", "data_files": [{"split": "train", "path": "mmlu_high_school_physics/train-*"}]}, {"config_name": "mmlu_high_school_psychology", "data_files": [{"split": "train", "path": "mmlu_high_school_psychology/train-*"}]}, {"config_name": "mmlu_high_school_statistics", "data_files": [{"split": "train", "path": "mmlu_high_school_statistics/train-*"}]}, {"config_name": "mmlu_high_school_us_history", "data_files": [{"split": "train", "path": "mmlu_high_school_us_history/train-*"}]}, {"config_name": "mmlu_high_school_world_history", "data_files": [{"split": "train", "path": "mmlu_high_school_world_history/train-*"}]}, {"config_name": 
"mmlu_human_aging", "data_files": [{"split": "train", "path": "mmlu_human_aging/train-*"}]}, {"config_name": "mmlu_human_sexuality", "data_files": [{"split": "train", "path": "mmlu_human_sexuality/train-*"}]}, {"config_name": "mmlu_international_law", "data_files": [{"split": "train", "path": "mmlu_international_law/train-*"}]}, {"config_name": "normal_instructions", "data_files": [{"split": "train", "path": "normal_instructions/train-*"}]}, {"config_name": "tiny-textbooks", "data_files": [{"split": "train", "path": "tiny-textbooks/train-*"}]}]}
2024-01-10T09:21:36+00:00
[ "2306.11644" ]
[ "ko" ]
TAGS #size_categories-1M<n<10M #language-Korean #license-apache-2.0 #arxiv-2306.11644 #region-us
Massive Korean synthetic dataset ================================ This dataset is a large-scale Korean synthetic dataset created using Gemini Pro. It was created using the methodology described in *Creation of synthetic textbook-quality datasets* in Textbooks Are All You Need. Data overview ------------- A sample of each subset does not necessarily indicate the full contents of that subset. Further modification is required before using this dataset for training. Rather than using this dataset as-is, we recommend processing it to fit your target task before use, e.g., converting it into a QA set using a local model (a loading sketch follows below). When you find a problem ----------------------- If you find any issues with the dataset, please let us know in the discussion or send us a pull request.
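A minimal loading sketch for one of the configs listed in the metadata above. The repository id is a placeholder, since it is not stated here, and the "claude_evol" config is assumed to expose the single "text" column shown in the metadata:

```python
from datasets import load_dataset

# Placeholder repo id -- substitute the actual path of this dataset on the Hub.
REPO_ID = "<namespace>/<korean-synthetic-dataset>"

# Per the metadata above, the "claude_evol" config has a single "text" column.
ds = load_dataset(REPO_ID, "claude_evol", split="train")
print(ds[0]["text"][:200])
```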
[]
[ "TAGS\n#size_categories-1M<n<10M #language-Korean #license-apache-2.0 #arxiv-2306.11644 #region-us \n" ]
[ 39 ]
[ "passage: TAGS\n#size_categories-1M<n<10M #language-Korean #license-apache-2.0 #arxiv-2306.11644 #region-us \n" ]
fea5d885b288d5100c7e30f1ea98bc7eb6f00641
This is a fork from https://huggingface.co/datasets/substratusai/k8s-instructions
sozercan/k8s-instructions
[ "license:apache-2.0", "region:us" ]
2023-12-27T23:36:56+00:00
{"license": "apache-2.0"}
2023-12-27T23:56:09+00:00
[]
[]
TAGS #license-apache-2.0 #region-us
This is a fork from URL
[]
[ "TAGS\n#license-apache-2.0 #region-us \n" ]
[ 14 ]
[ "passage: TAGS\n#license-apache-2.0 #region-us \n" ]
417ca1c9ab629b1a19699d62e1874cd78177f5f8
### Dataset Description: College and University Financial Audits and Text Dataset

#### **Overview**

The College and University Financial Audits and Text Dataset combines audited financial statement texts with key financial data extracted from various colleges and universities. This dataset stands out for its emphasis on long-context input data, making it especially useful for training large language models to extract data from long input texts.

#### **Dataset Composition**

This dataset contains over 125,000 entries with the following variables:

1. **AuditText**: The complete text of the audited financial statement. These can be very long input texts.
2. **fiscalYear**: The fiscal year of the reported data.
3. **auditYear**: The year in which the audit was performed.
4. **value**: The value of **variable** for the fiscal year given by **fiscalYear**, as reported in the audit text.
5. **variable**: The name of the financial variable extracted from the audit text.

#### **Key Features and Use Cases**

- **Long-Context Input Data**: The inclusion of complete audit texts offers a unique opportunity for training and testing LLMs on long-form financial documents.
- **Model Training and Fine-Tuning**: Ideal for fine-tuning AI models for data extraction from long textual inputs, such as in-depth financial reports.
- **Financial and Educational Research**: Facilitates comprehensive analysis in the domains of finance and education, allowing for nuanced insights into the financial workings of higher education institutions.

#### **Suggested prompt format for training**

**Prompt**: "Extract the {variable} for fiscal year {fiscalYear} from the following text: \n\n {AuditText}"

**Response**: "{value}"
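A minimal sketch of assembling this prompt/response pair from one record. It assumes the dataset loads from the Hub under a "train" split with exactly the column names listed above:

```python
from datasets import load_dataset

# Assumes the columns described above (AuditText, fiscalYear, variable, value)
# are exposed directly when loading the dataset from the Hub.
ds = load_dataset("PDScience/CollegeAuditData", split="train")

def build_example(row: dict) -> dict:
    """Format one record into the suggested prompt/response pair."""
    prompt = (
        f"Extract the {row['variable']} for fiscal year {row['fiscalYear']} "
        f"from the following text: \n\n {row['AuditText']}"
    )
    return {"prompt": prompt, "response": str(row["value"])}

print(build_example(ds[0])["prompt"][:300])
```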
PDScience/CollegeAuditData
[ "task_categories:question-answering", "task_categories:feature-extraction", "task_categories:text-generation", "size_categories:100K<n<1M", "language:en", "license:apache-2.0", "finance", "region:us" ]
2023-12-27T23:45:36+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["question-answering", "feature-extraction", "text-generation"], "pretty_name": "College Audit Data", "tags": ["finance"]}
2024-01-01T02:09:22+00:00
[]
[ "en" ]
TAGS #task_categories-question-answering #task_categories-feature-extraction #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #finance #region-us
### Dataset Description: College and University Financial Audits and Text Dataset #### Overview The College and University Financial Audits and Text Dataset is a dataset that combines audited financial statement texts with key extracted financial data from various colleges and universities. This dataset stands out for its emphasis on long context input data, making it especially useful for training large language models to extract data from long input texts. #### Dataset Composition This dataset contains over 125,000 entries with the following variables: 1. AuditText: The complete text of the audited financial statement. These can be very long input texts. 2. fiscalYear: The fiscal year of the reported data. 3. auditYear: The year in which the audit was performed. 4. value: Value of the variable in fiscal year==fiscalYear as reported in the audit text. 5. variable: Variable extracted from the audit text. #### Key Features and Use Cases - Long Context Input Data: The inclusion of complete audit texts offers a unique opportunity for training and testing LLMs on long-form financial documents. - Model Training and fine-tuning: Ideal for fine-tuning AI models for data extraction from long textual inputs, such as in-depth financial reports. - Financial and Educational Research: Facilitates comprehensive analysis in the domains of finance and education, allowing for nuanced insights into the financial workings of higher education institutions. #### Suggested prompt format for training Prompt: "Extract the {variable} for fiscal year {fiscalYear} from the following text: \n\n {AuditText}" Response: "{value}"
[ "### Dataset Description: College and University Financial Audits and Text Dataset", "#### Overview\n\nThe College and University Financial Audits and Text Dataset is a dataset that combines audited financial statement texts with key extracted financial data from various colleges and universities. This dataset stands out for its emphasis on long context input data, making it especially useful for training large language models to extract data from long input texts.", "#### Dataset Composition\n\nThis dataset contains over 125,000 entries with the following variables:\n\n1. AuditText: The complete text of the audited financial statement. These can be very long input texts.\n2. fiscalYear: The fiscal year of the reported data.\n3. auditYear: The year in which the audit was performed.\n4. value: Value of the variable in fiscal year==fiscalYear as reported in the audit text.\n5. variable: Variable extracted from the audit text.", "#### Key Features and Use Cases\n\n- Long Context Input Data: The inclusion of complete audit texts offers a unique opportunity for training and testing LLMs on long-form financial documents. \n- Model Training and fine-tuning: Ideal for fine-tuning AI models for data extraction from long textual inputs, such as in-depth financial reports.\n- Financial and Educational Research: Facilitates comprehensive analysis in the domains of finance and education, allowing for nuanced insights into the financial workings of higher education institutions.", "#### Suggested prompt format for training\n\nPrompt: \"Extract the {variable} for fiscal year {fiscalYear} from the following text: \\n\\n {AuditText}\"\n\nResponse: \"{value}\"" ]
[ "TAGS\n#task_categories-question-answering #task_categories-feature-extraction #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #finance #region-us \n", "### Dataset Description: College and University Financial Audits and Text Dataset", "#### Overview\n\nThe College and University Financial Audits and Text Dataset is a dataset that combines audited financial statement texts with key extracted financial data from various colleges and universities. This dataset stands out for its emphasis on long context input data, making it especially useful for training large language models to extract data from long input texts.", "#### Dataset Composition\n\nThis dataset contains over 125,000 entries with the following variables:\n\n1. AuditText: The complete text of the audited financial statement. These can be very long input texts.\n2. fiscalYear: The fiscal year of the reported data.\n3. auditYear: The year in which the audit was performed.\n4. value: Value of the variable in fiscal year==fiscalYear as reported in the audit text.\n5. variable: Variable extracted from the audit text.", "#### Key Features and Use Cases\n\n- Long Context Input Data: The inclusion of complete audit texts offers a unique opportunity for training and testing LLMs on long-form financial documents. \n- Model Training and fine-tuning: Ideal for fine-tuning AI models for data extraction from long textual inputs, such as in-depth financial reports.\n- Financial and Educational Research: Facilitates comprehensive analysis in the domains of finance and education, allowing for nuanced insights into the financial workings of higher education institutions.", "#### Suggested prompt format for training\n\nPrompt: \"Extract the {variable} for fiscal year {fiscalYear} from the following text: \\n\\n {AuditText}\"\n\nResponse: \"{value}\"" ]
[ 68, 16, 75, 108, 118, 55 ]
[ "passage: TAGS\n#task_categories-question-answering #task_categories-feature-extraction #task_categories-text-generation #size_categories-100K<n<1M #language-English #license-apache-2.0 #finance #region-us \n### Dataset Description: College and University Financial Audits and Text Dataset#### Overview\n\nThe College and University Financial Audits and Text Dataset is a dataset that combines audited financial statement texts with key extracted financial data from various colleges and universities. This dataset stands out for its emphasis on long context input data, making it especially useful for training large language models to extract data from long input texts.#### Dataset Composition\n\nThis dataset contains over 125,000 entries with the following variables:\n\n1. AuditText: The complete text of the audited financial statement. These can be very long input texts.\n2. fiscalYear: The fiscal year of the reported data.\n3. auditYear: The year in which the audit was performed.\n4. value: Value of the variable in fiscal year==fiscalYear as reported in the audit text.\n5. variable: Variable extracted from the audit text.#### Key Features and Use Cases\n\n- Long Context Input Data: The inclusion of complete audit texts offers a unique opportunity for training and testing LLMs on long-form financial documents. \n- Model Training and fine-tuning: Ideal for fine-tuning AI models for data extraction from long textual inputs, such as in-depth financial reports.\n- Financial and Educational Research: Facilitates comprehensive analysis in the domains of finance and education, allowing for nuanced insights into the financial workings of higher education institutions.#### Suggested prompt format for training\n\nPrompt: \"Extract the {variable} for fiscal year {fiscalYear} from the following text: \\n\\n {AuditText}\"\n\nResponse: \"{value}\"" ]
35f15f9fe2abd7b5a596d2116919c1c02c3f4abf
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. 
## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Omega02gdfdd/bioclip-demo-open-domain-mistakes
[ "region:us" ]
2023-12-28T00:55:48+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
2024-01-19T01:38:29+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 8, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
12b0313ba4c3189ee5a24cb76200959e9bf7492e
# Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. 
## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
Omega02gdfdd/bioclip-demo-zero-shot-mistakes
[ "region:us" ]
2023-12-28T00:56:52+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data.csv"}]}]}
2024-01-19T01:38:31+00:00
[]
[]
TAGS #region-us
# Dataset Card for Dataset Name ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
[ "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Dataset Name", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 8, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Dataset Name## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
54918560ec8b78f601f3df0e779a5d1270e71fbe
# Dataset Description

- **Curated by:** Zhang Xin from Beihang University (BUAA). The dataset was created using an AI tool to generate summaries of Wikipedia articles, aiming to support NLP research and applications, especially in the context of language processing.
- **Funded by:** The creation of this dataset was internally supported by Beihang University as part of academic research initiatives.
- **Shared by:** Zhang Xin from the Department of Computer Science, Beihang University.
- **Language(s) (NLP):** English
- **License:** The dataset is distributed under a CC0 "No Rights Reserved" license, encouraging academic and commercial use while acknowledging the original source of the Wikipedia content.

## Dataset Sources

- **Repository:** The dataset is currently not publicly available but can be accessed upon request for academic or research purposes.
- **Paper:** Details about the dataset generation process and initial benchmarks are described in the working paper: "AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research", Zhang Xin et al., Beihang University.

## Uses

- **Direct Use:** Suitable for training and evaluating models on text summarization, language understanding, and other NLP tasks that require condensed representations of source content.
- **Out-of-Scope Use:** The dataset is not intended for identifying or generating personalized content, as it does not contain user-specific information or preferences.

## Dataset Structure

The dataset consists of JSON files where each entry has the following format (a parsing sketch is given after this card):

{
  'original': 'string',
  'truncated_text': 'string' (truncated to 2,000 characters),
  'semantic_content': 'string'
}

## Dataset Creation

- **Curation Rationale:** The dataset was curated to fill the gap in the availability of summarized text for NLP research. By leveraging AI tools to generate summaries, we aim to provide a resource that can help in improving summarization algorithms and understanding condensed Chinese text.

## Source Data

- **Data Collection and Processing:** Summaries were generated using a proprietary AI-based summarization tool. The input data was sourced from a selection of Chinese Wikipedia articles spanning various topics and domains.
- **Annotations:** No manual annotations were provided, as the dataset was generated through an automated process without human intervention.

## Personal and Sensitive Information

As the dataset is generated from publicly available Wikipedia articles and contains only factual summaries, it does not include any personal or sensitive information.

## Bias, Risks, and Limitations

As the dataset is derived from Wikipedia, it may inherit the biases present in the articles. These include but are not limited to cultural, topical, and linguistic biases. Users should exercise caution and perform additional bias analysis when using this dataset in their models.

## Recommendations

We recommend that users of this dataset acknowledge the potential biases and evaluate models trained on this dataset across a variety of metrics to ensure fairness and robustness.

## Citation

Please cite the following paper if you use this dataset in your research:

Zhang, X. et al. (Year). AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research. Beihang University.

## Dataset Card Authors

The dataset card was authored by Zhang Xin and the AI Research Group at Beihang University.

## Dataset Card Contact

For further inquiries or access requests, please contact Zhang Xin at [email protected].
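A minimal parsing sketch under the structure described above. The file name is a placeholder (the card does not name the files), and one-JSON-object-per-line storage is an assumption; adjust both to the actual distribution:

```python
import json

# Placeholder file name -- the card does not specify how the JSON files are named.
PATH = "wikipedia_summary.jsonl"

# Assumes one JSON object per line; if the files hold a single JSON array
# instead, use json.load(f) and iterate over the resulting list.
with open(PATH, "r", encoding="utf-8") as f:
    for line in f:
        entry = json.loads(line)
        original = entry["original"]
        truncated = entry["truncated_text"]   # per the card, truncated to 2,000 characters
        summary = entry["semantic_content"]
        print(len(original), len(truncated), summary[:80])
        break
```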
xinzhang/wikipedia_summary
[ "task_categories:summarization", "size_categories:1M<n<10M", "language:en", "license:mit", "region:us" ]
2023-12-28T01:35:54+00:00
{"language": ["en"], "license": "mit", "size_categories": ["1M<n<10M"], "task_categories": ["summarization"], "pretty_name": "wikiprompt"}
2023-12-28T01:57:54+00:00
[]
[ "en" ]
TAGS #task_categories-summarization #size_categories-1M<n<10M #language-English #license-mit #region-us
# Dataset Description - Curated by: Zhang Xin from Beihang University (BUAA). The dataset was created using an AI tool to generate summaries of Wikipedia articles, aiming to support NLP research and applications, especially in the context of language processing. - Funded by: The creation of this dataset was internally supported by Beihang University as part of academic research initiatives. - Shared by: Zhang Xin from the Department of Computer Science, Beihang University. - Language(s) (NLP): English - License: The dataset is distributed under a CC0 "No Rights Reserved" license, encouraging academic and commercial use while acknowledging the original source of the Wikipedia content. ## Dataset Sources - Repository: The dataset is currently not publicly available but can be accessed upon request for academic or research purposes. - Paper: Details about the dataset generation process and initial benchmarks are described in the working paper: "AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research", Zhang Xin et al., Beihang University. ## Uses - Direct Use: Suitable for training and evaluating models on text summarization, language understanding, and other NLP tasks that require condensed representations of source content. - Out-of-Scope Use: The dataset is not intended for identifying or generating personalized content, as it does not contain user-specific information or preferences. ## Dataset Structure The dataset consists of JSON files where each entry has the following format: { 'original': 'string', 'truncated_text': 'string' (truncated to 2,000 characters), 'semantic_content': 'string' } ## Dataset Creation - Curation Rationale: The dataset was curated to fill the gap in the availability of summarized text for NLP research. By leveraging AI tools to generate summaries, we aim to provide a resource that can help in improving summarization algorithms and understanding condensed Chinese text. ## Source Data - Data Collection and Processing: Summaries were generated using a proprietary AI-based summarization tool. The input data was sourced from a selection of Chinese Wikipedia articles spanning various topics and domains. - Annotations: No manual annotations were provided, as the dataset was generated through an automated process without human intervention. ## Personal and Sensitive Information As the dataset is generated from publicly available Wikipedia articles and contains only factual summaries, it does not include any personal or sensitive information. ## Bias, Risks, and Limitations As the dataset is derived from Wikipedia, it may inherit the biases present in the articles. These include but are not limited to cultural, topical, and linguistic biases. Users should exercise caution and perform additional bias analysis when using this dataset in their models. ## Recommendations We recommend that users of this dataset acknowledge the potential biases and evaluate models trained on this dataset across a variety of metrics to ensure fairness and robustness. Please cite the following paper if you use this dataset in your research: Zhang, X. et al. (Year). AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research. Beihang University. ## Dataset Card Authors The dataset card was authored by Zhang Xin and the AI Research Group at Beihang University. ## Dataset Card Contact For further inquiries or access requests, please contact Zhang Xin at zxin0423@URL.
[ "# Dataset Description\n\n- Curated by: Zhang Xin from Beihang University (BUAA). The dataset was created using an AI tool to generate summaries of Wikipedia articles, aiming to support NLP research and applications, especially in the context of language processing.\n\n- Funded by: The creation of this dataset was internally supported by Beihang University as a part of academic research initiatives.\n\n- Shared by: Zhang Xin from the Department of Computer Science, Beihang University.\n\n- Language(s) (NLP): English\n\n- License: The dataset is distributed under a CC0 \"No Rights Reserved\" license, encouraging academic and commercial use while acknowledging the original source of the Wikipedia content.", "## Dataset Sources \n\n- Repository: The dataset is currently not publicly available but can be accessed upon request for academic or research purposes.\n\n- Paper : Details about the dataset generation process and initial benchmarks are described in the working paper: \"AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research\", Zhang Xin et al., Beihang University.", "## Uses\n\n- Direct Use: Suitable for training and evaluating models on text summarization, language understanding, and other NLP tasks that require condensed representations of source content.\n\n- Out-of-Scope Use: The dataset is not intended for identifying or generating personalized content, as it does not contain user-specific information or preferences.", "## Dataset Structure\n\nThe dataset consists of JSON files where each entry has the following format:\n\n\n{\n\n\n 'original': 'string',\n\n \n 'truncated_text': 'string' with 2000 length,\n\n \n 'semantic_content': 'string'\n\n \n}", "## Dataset Creation\n\n- Curation Rationale: The dataset was curated to fill the gap in the availability of summarized text for NLP research. By leveraging AI tools to generate summaries, we aim to provide a resource that can help in improving summarization algorithms and understanding condensed Chinese text.", "## Source Data\n\n- Data Collection and Processing: Summaries were generated using a proprietary AI-based summarization tool. The input data was sourced from a selection of Chinese Wikipedia articles spanning various topics and domains.\n\n- Annotations:\n\nNo manual annotations were provided as the dataset was generated through an automated process without human intervention.", "## Personal and Sensitive Information\n\nAs the dataset is generated from publicly available Wikipedia articles and contains only factual summaries, it does not include any personal or sensitive information.", "## Bias, Risks, and Limitations\n\nAs the dataset is derived from Wikipedia, it may inherit the biases present in the articles. These include but are not limited to cultural, topical, and linguistic biases. Users should exercise caution and perform additional bias analysis when using this dataset in their models.", "## Recommendations\n\nWe recommend users of this dataset to acknowledge the potential biases and evaluate the models trained on this dataset across a variety of metrics to ensure fairness and robustness.\n\nPlease cite the following paper if you use this dataset in your research:\\n\nZhang, X. et al. (Year). AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research. 
Beihang University.", "## Dataset Card Authors \n\nThe dataset card was authored by Zhang Xin and the AI Research Group at Beihang University.", "## Dataset Card Contact\n\nFor further inquiries or access requests, please contact Zhang Xin at zxin0423@URL ." ]
[ "TAGS\n#task_categories-summarization #size_categories-1M<n<10M #language-English #license-mit #region-us \n", "# Dataset Description\n\n- Curated by: Zhang Xin from Beihang University (BUAA). The dataset was created using an AI tool to generate summaries of Wikipedia articles, aiming to support NLP research and applications, especially in the context of language processing.\n\n- Funded by: The creation of this dataset was internally supported by Beihang University as a part of academic research initiatives.\n\n- Shared by: Zhang Xin from the Department of Computer Science, Beihang University.\n\n- Language(s) (NLP): English\n\n- License: The dataset is distributed under a CC0 \"No Rights Reserved\" license, encouraging academic and commercial use while acknowledging the original source of the Wikipedia content.", "## Dataset Sources \n\n- Repository: The dataset is currently not publicly available but can be accessed upon request for academic or research purposes.\n\n- Paper : Details about the dataset generation process and initial benchmarks are described in the working paper: \"AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research\", Zhang Xin et al., Beihang University.", "## Uses\n\n- Direct Use: Suitable for training and evaluating models on text summarization, language understanding, and other NLP tasks that require condensed representations of source content.\n\n- Out-of-Scope Use: The dataset is not intended for identifying or generating personalized content, as it does not contain user-specific information or preferences.", "## Dataset Structure\n\nThe dataset consists of JSON files where each entry has the following format:\n\n\n{\n\n\n 'original': 'string',\n\n \n 'truncated_text': 'string' with 2000 length,\n\n \n 'semantic_content': 'string'\n\n \n}", "## Dataset Creation\n\n- Curation Rationale: The dataset was curated to fill the gap in the availability of summarized text for NLP research. By leveraging AI tools to generate summaries, we aim to provide a resource that can help in improving summarization algorithms and understanding condensed Chinese text.", "## Source Data\n\n- Data Collection and Processing: Summaries were generated using a proprietary AI-based summarization tool. The input data was sourced from a selection of Chinese Wikipedia articles spanning various topics and domains.\n\n- Annotations:\n\nNo manual annotations were provided as the dataset was generated through an automated process without human intervention.", "## Personal and Sensitive Information\n\nAs the dataset is generated from publicly available Wikipedia articles and contains only factual summaries, it does not include any personal or sensitive information.", "## Bias, Risks, and Limitations\n\nAs the dataset is derived from Wikipedia, it may inherit the biases present in the articles. These include but are not limited to cultural, topical, and linguistic biases. Users should exercise caution and perform additional bias analysis when using this dataset in their models.", "## Recommendations\n\nWe recommend users of this dataset to acknowledge the potential biases and evaluate the models trained on this dataset across a variety of metrics to ensure fairness and robustness.\n\nPlease cite the following paper if you use this dataset in your research:\\n\nZhang, X. et al. (Year). AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research. 
Beihang University.", "## Dataset Card Authors \n\nThe dataset card was authored by Zhang Xin and the AI Research Group at Beihang University.", "## Dataset Card Contact\n\nFor further inquiries or access requests, please contact Zhang Xin at zxin0423@URL ." ]
[ 37, 154, 88, 80, 58, 72, 76, 39, 74, 99, 27, 29 ]
[ "passage: TAGS\n#task_categories-summarization #size_categories-1M<n<10M #language-English #license-mit #region-us \n# Dataset Description\n\n- Curated by: Zhang Xin from Beihang University (BUAA). The dataset was created using an AI tool to generate summaries of Wikipedia articles, aiming to support NLP research and applications, especially in the context of language processing.\n\n- Funded by: The creation of this dataset was internally supported by Beihang University as a part of academic research initiatives.\n\n- Shared by: Zhang Xin from the Department of Computer Science, Beihang University.\n\n- Language(s) (NLP): English\n\n- License: The dataset is distributed under a CC0 \"No Rights Reserved\" license, encouraging academic and commercial use while acknowledging the original source of the Wikipedia content.## Dataset Sources \n\n- Repository: The dataset is currently not publicly available but can be accessed upon request for academic or research purposes.\n\n- Paper : Details about the dataset generation process and initial benchmarks are described in the working paper: \"AI-Generated Summaries of Chinese Wikipedia Articles: A New Dataset for NLP Research\", Zhang Xin et al., Beihang University.## Uses\n\n- Direct Use: Suitable for training and evaluating models on text summarization, language understanding, and other NLP tasks that require condensed representations of source content.\n\n- Out-of-Scope Use: The dataset is not intended for identifying or generating personalized content, as it does not contain user-specific information or preferences.## Dataset Structure\n\nThe dataset consists of JSON files where each entry has the following format:\n\n\n{\n\n\n 'original': 'string',\n\n \n 'truncated_text': 'string' with 2000 length,\n\n \n 'semantic_content': 'string'\n\n \n}## Dataset Creation\n\n- Curation Rationale: The dataset was curated to fill the gap in the availability of summarized text for NLP research. By leveraging AI tools to generate summaries, we aim to provide a resource that can help in improving summarization algorithms and understanding condensed Chinese text." ]
820a0a83a5a78bd2b07689b769343dbdd059f4ca
# Dataset Card for "Subject-Driven-image-Generation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TIGER-Lab/Subject-Driven-image-Generation
[ "region:us" ]
2023-12-28T02:10:43+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "uid", "dtype": "int64"}, {"name": "subject_id", "dtype": "int64"}, {"name": "subject_image_0", "dtype": "image"}, {"name": "subject_image_1", "dtype": "image"}, {"name": "subject_image_2", "dtype": "image"}], "splits": [{"name": "eval", "num_bytes": 144522527.0, "num_examples": 150}, {"name": "full", "num_bytes": 210463659.0, "num_examples": 215}], "download_size": 57827973, "dataset_size": 354986186.0}}
2023-12-28T04:02:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Subject-Driven-image-Generation" More Information needed
[ "# Dataset Card for \"Subject-Driven-image-Generation\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Subject-Driven-image-Generation\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Subject-Driven-image-Generation\"\n\nMore Information needed" ]
260f99e6c0b9214527cde30e9b306b69c3802586
## Symbol-LLM: Towards Foundational Symbol-centric Interface for Large Language Models

Paper Link: https://arxiv.org/abs/2311.09278

Project Page: https://xufangzhi.github.io/symbol-llm-page/

## 🔥 News

- 🔥🔥🔥 We have made a part of the Symbolic Collection public, including ~88K samples for training (10% of the whole collection). The whole collection is expected to be released upon acceptance of the paper.
- 🔥🔥🔥 The model weights (7B / 13B) are released!

## Note

This work is still under review.

## Citation

If you find it helpful, please kindly cite the paper.

```
@article{xu2023symbol,
  title={Symbol-LLM: Towards Foundational Symbol-centric Interface For Large Language Models},
  author={Xu, Fangzhi and Wu, Zhiyong and Sun, Qiushi and Ren, Siyu and Yuan, Fei and Yuan, Shuai and Lin, Qika and Qiao, Yu and Liu, Jun},
  journal={arXiv preprint arXiv:2311.09278},
  year={2023}
}
```
Symbol-LLM/Symbolic_Collection
[ "task_categories:text-generation", "size_categories:100K<n<1M", "license:apache-2.0", "arxiv:2311.09278", "region:us" ]
2023-12-28T02:45:11+00:00
{"license": "apache-2.0", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"]}
2023-12-28T11:30:03+00:00
[ "2311.09278" ]
[]
TAGS #task_categories-text-generation #size_categories-100K<n<1M #license-apache-2.0 #arxiv-2311.09278 #region-us
## Symbol-LLM: Towards Foundational Symbol-centric Interface for Large Language Models Paper Link: URL Project Page: URL ## News - We have made a part of the Symbolic Collection public, including ~88K samples for training (10% of the whole collection). The whole collection is expected to be released upon acceptance of the paper. - The model weights (7B / 13B) are released! ## Note This work is still under review. If you find it helpful, please kindly cite the paper.
[ "## Symbol-LLM: Towards Foundational Symbol-centric Interface for Large Language Models\n\nPaper Link: URL\n\nProject Page: URL", "## News\n\n- We have made a part of the Symbolic Collection public, including ~88K samples for training (10% of the whole collection). The whole collection is expected to release upon acceptance of the paper.\n\n- The model weights (7B / 13B) are released !", "## Note\n\nThis work is still under review.\n\nIf you find it helpful, please kindly cite the paper." ]
[ "TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #license-apache-2.0 #arxiv-2311.09278 #region-us \n", "## Symbol-LLM: Towards Foundational Symbol-centric Interface for Large Language Models\n\nPaper Link: URL\n\nProject Page: URL", "## News\n\n- We have made a part of the Symbolic Collection public, including ~88K samples for training (10% of the whole collection). The whole collection is expected to release upon acceptance of the paper.\n\n- The model weights (7B / 13B) are released !", "## Note\n\nThis work is still under review.\n\nIf you find it helpful, please kindly cite the paper." ]
[ 46, 29, 58, 22 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #license-apache-2.0 #arxiv-2311.09278 #region-us \n## Symbol-LLM: Towards Foundational Symbol-centric Interface for Large Language Models\n\nPaper Link: URL\n\nProject Page: URL## News\n\n- We have made a part of the Symbolic Collection public, including ~88K samples for training (10% of the whole collection). The whole collection is expected to release upon acceptance of the paper.\n\n- The model weights (7B / 13B) are released !## Note\n\nThis work is still under review.\n\nIf you find it helpful, please kindly cite the paper." ]
8a9f31b5c30de79eb8c78d8aa3818028e6292f82
# MS MARCO Distillation Scores for Translate-Distill

This repository contains [MS MARCO](https://microsoft.github.io/msmarco/) training query-passage scores produced by the MonoT5 rerankers [`unicamp-dl/mt5-13b-mmarco-100k`](https://huggingface.co/unicamp-dl/mt5-13b-mmarco-100k) and [`castorini/monot5-3b-msmarco-10k`](https://huggingface.co/castorini/monot5-3b-msmarco-10k).

Each training query is associated with the top-50 passages retrieved by the [ColBERTv2](https://arxiv.org/abs/2112.01488) model.

Files are gzip-compressed and follow the naming scheme `{teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}.jsonl.gz`, which identifies the teacher reranker that performed inference using `qlang` queries and `plang` passages from MS MARCO. For languages other than English (eng), we use the translated text provided by mMARCO and [neuMARCO](https://ir-datasets.com/neumarco.html).

We additionally provide the Persian translation of the MS MARCO training queries, since they were not included in either neuMARCO or mMARCO. You can find the tsv files containing the translation in `msmarco.train.query.fas.tsv.gz`.

## Usage

We recommend downloading the files for use with the training script in the [PLAID-X](https://github.com/hltcoe/ColBERT-X/tree/plaid-x) codebase.

## Citation and Bibtex Info

Please cite the following paper if you use the scores.

```bibtex
@inproceedings{translate-distill,
	author = {Eugene Yang and Dawn Lawrie and James Mayfield and Douglas W. Oard and Scott Miller},
	title = {Translate-Distill: Learning Cross-Language Dense Retrieval by Translation and Distillation},
	booktitle = {Proceedings of the 46th European Conference on Information Retrieval (ECIR)},
	year = {2024},
	url = {https://arxiv.org/abs/2401.04810}
}
```
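For orientation, a minimal sketch of peeking at one of the gzipped JSONL score files; the file name below is a hypothetical instance of the naming scheme, and the per-line schema is not documented in this card, so print a record and check the real keys before relying on them.

```python
# A minimal sketch, assuming each line of the gzipped JSONL file is one JSON
# record of teacher scores; the file name is a hypothetical instance of the
# {teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}.jsonl.gz naming scheme.
import gzip
import json

path = "mt5-monot5-mmarco-engeng.jsonl.gz"  # hypothetical example file name

with gzip.open(path, "rt", encoding="utf-8") as f:
    first = json.loads(next(f))

print(first.keys())  # inspect the real schema before wiring this into training
```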
hltcoe/tdist-msmarco-scores
[ "license:mit", "arxiv:2112.01488", "arxiv:2401.04810", "region:us" ]
2023-12-28T03:45:07+00:00
{"license": "mit"}
2024-01-11T02:24:48+00:00
[ "2112.01488", "2401.04810" ]
[]
TAGS #license-mit #arxiv-2112.01488 #arxiv-2401.04810 #region-us
# MS MARCO Distillation Scores for Translate-Distill This repository contains MS MARCO training query-passage scores produced by the MonoT5 rerankers 'unicamp-dl/mt5-13b-mmarco-100k' and 'castorini/monot5-3b-msmarco-10k'. Each training query is associated with the top-50 passages retrieved by the ColBERTv2 model. Files are gzip-compressed and follow the naming scheme '{teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}.URL', which identifies the teacher reranker that performed inference using 'qlang' queries and 'plang' passages from MS MARCO. For languages other than English (eng), we use the translated text provided by mMARCO and neuMARCO. We additionally provide the Persian translation of the MS MARCO training queries since they were not included in either neuMARCO or mMARCO. You can find the tsv files containing the translation in 'URL'. ## Usage We recommend downloading the files for use with the training script in the PLAID-X codebase. ## Citation and Bibtex Info Please cite the following paper if you use the scores.
[ "# MS MARCO Distillation Scores for Translate-Distill\n\nThis repository contains MS MARCO training \nquery-passage scores produced by MonoT5 reranker \n'unicamp-dl/mt5-13b-mmarco-100k' and\n'castorini/monot5-3b-msmarco-10k'. \n\nEach training query is associated with the top-50 passages retrieved by the ColBERTv2 model. \n\nFiles are gzip compressed and with the naming scheme of '{teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}.URL', \nwhich indicates the teacher reranker that inferenced using 'qlang' queries and 'plang' passages from MS MARCO. \nFor languages other than English (eng), we use the translated text provided by mmarco and neuMarco. \n\nWe additionally provide the Persian translation of the MS MARCO training queries since they were not included in either neuMARCO or mMARCO. \nYou can find the tsv files containing the translation in 'URL'.", "## Usage\n\nWe recommand downloading the files to incorporate with the training script in the PLAID-X codebase. \n\nand Bibtex Info\n\nPlease cite the following paper if you use the scores." ]
[ "TAGS\n#license-mit #arxiv-2112.01488 #arxiv-2401.04810 #region-us \n", "# MS MARCO Distillation Scores for Translate-Distill\n\nThis repository contains MS MARCO training \nquery-passage scores produced by MonoT5 reranker \n'unicamp-dl/mt5-13b-mmarco-100k' and\n'castorini/monot5-3b-msmarco-10k'. \n\nEach training query is associated with the top-50 passages retrieved by the ColBERTv2 model. \n\nFiles are gzip compressed and with the naming scheme of '{teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}.URL', \nwhich indicates the teacher reranker that inferenced using 'qlang' queries and 'plang' passages from MS MARCO. \nFor languages other than English (eng), we use the translated text provided by mmarco and neuMarco. \n\nWe additionally provide the Persian translation of the MS MARCO training queries since they were not included in either neuMARCO or mMARCO. \nYou can find the tsv files containing the translation in 'URL'.", "## Usage\n\nWe recommand downloading the files to incorporate with the training script in the PLAID-X codebase. \n\nand Bibtex Info\n\nPlease cite the following paper if you use the scores." ]
[ 29, 250, 44 ]
[ "passage: TAGS\n#license-mit #arxiv-2112.01488 #arxiv-2401.04810 #region-us \n# MS MARCO Distillation Scores for Translate-Distill\n\nThis repository contains MS MARCO training \nquery-passage scores produced by MonoT5 reranker \n'unicamp-dl/mt5-13b-mmarco-100k' and\n'castorini/monot5-3b-msmarco-10k'. \n\nEach training query is associated with the top-50 passages retrieved by the ColBERTv2 model. \n\nFiles are gzip compressed and with the naming scheme of '{teacher}-monot5-{msmarco, mmarco}-{qlang}{plang}.URL', \nwhich indicates the teacher reranker that inferenced using 'qlang' queries and 'plang' passages from MS MARCO. \nFor languages other than English (eng), we use the translated text provided by mmarco and neuMarco. \n\nWe additionally provide the Persian translation of the MS MARCO training queries since they were not included in either neuMARCO or mMARCO. \nYou can find the tsv files containing the translation in 'URL'.## Usage\n\nWe recommand downloading the files to incorporate with the training script in the PLAID-X codebase. \n\nand Bibtex Info\n\nPlease cite the following paper if you use the scores." ]
2bbadabaab458c317bba1d535a68e5c191056e3a
CogBench is the benchmark introduced in CogGPT ([GitHub](https://github.com/KwaiKEG/CogGPT)), a series of agent-related works open-sourced by [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). It consists of 22,000 pieces of bilingual data designed to evaluate the cognitive dynamics of LLMs. CogBench is divided into two parts based on the type of information flow: CogBench<sub>a</sub> for articles and CogBench<sub>v</sub> for short videos. The evaluation metrics, including Authenticity and Rationality, assess the ratings and reasoning of an agent, respectively. ## Overall statistics of CogBench --- | Type | #Instances | #Cognitive Questionnaires | #Profiles | #Information Flows | Avg. Length (words) | #Info Flows/Iteration | | :-------: | :-------:| :-------: | :-------: | :-------: | :-------: | :-------: | | CogBench<sub>a</sub> | 11,000 | 50 | 20 | 500 | 2,044.54 | 1 | | CogBench<sub>v</sub> | 11,000 | 50 | 20 | 5,000 | 289.60 | 10 | ## Evaluation results of different agents in CogBench --- The specific performance of different agents in CogBench is detailed in our [paper](https://arxiv.org/abs/2401.08438). - Performance of different agents in CogBench with the Authenticity metric. <table> <tr> <th style="text-align: center; font-weight: bold;" rowspan="2"> Methods </th> <td colspan="3" style="text-align: center;"> CogBench<sub>a</sub> </td> <td colspan="3" style="text-align: center;"> CogBench<sub>v</sub> </td> </tr> <tr> <th style="text-align: center;"> avg. </th> <th style="text-align: center;"> 5th </th> <th style="text-align: center;"> 10th </th> <th style="text-align: center;"> avg. </th> <th style="text-align: center;"> 5th </th> <th style="text-align: center;"> 10th </th> </tr> <tr> <td style="text-align: center; font-weight: bold;"> CoT </td> <td style="text-align: center;"> 0.182 </td> <td style="text-align: center;"> 0.192 </td> <td style="text-align: center;"> 0.091 </td> <td style="text-align: center;"> 0.153 </td> <td style="text-align: center;"> 0.302 </td> <td style="text-align: center;"> 0.131 </td> </tr> <tr> <td style="text-align: center; font-weight: bold;"> ReAct </td> <td style="text-align: center;"> 0.236 </td> <td style="text-align: center;"> 0.144 </td> <td style="text-align: center;"> 0.270 </td> <td style="text-align: center;"> 0.212 </td> <td style="text-align: center;"> 0.241 </td> <td style="text-align: center;"> 0.227 </td> </tr> <tr> <td style="text-align: center; font-weight: bold;"> Reflexion </td> <td style="text-align: center;"> 0.302 </td> <td style="text-align: center;"> 0.327 </td> <td style="text-align: center;"> 0.244 </td> <td style="text-align: center;"> 0.329 </td> <td style="text-align: center;"> 0.352 </td> <td style="text-align: center;"> 0.373 </td> </tr> <tr> <td style="text-align: center; font-weight: bold;"> CogGPT </td> <td style="text-align: center; font-weight: bold;"> 0.536 </td> <td style="text-align: center; font-weight: bold;"> 0.415 </td> <td style="text-align: center; font-weight: bold;"> 0.597 </td> <td style="text-align: center; font-weight: bold;"> 0.532 </td> <td style="text-align: center; font-weight: bold;"> 0.496 </td> <td style="text-align: center; font-weight: bold;"> 0.611 </td> </tr> </table> - Performance of different agents in CogBench with the Rationality metric. 
<table> <tr> <th style="text-align: center; font-weight: bold;" rowspan="2"> Methods </th> <td colspan="3" style="text-align: center;"> CogBench<sub>a</sub> </td> <td colspan="3" style="text-align: center;"> CogBench<sub>v</sub> </td> </tr> <tr> <th style="text-align: center;"> avg. </th> <th style="text-align: center;"> 5th </th> <th style="text-align: center;"> 10th </th> <th style="text-align: center;"> avg. </th> <th style="text-align: center;"> 5th </th> <th style="text-align: center;"> 10th </th> </tr> <tr> <td style="text-align: center; font-weight: bold;"> CoT </td> <td style="text-align: center;"> 2.925 </td> <td style="text-align: center;"> 2.883 </td> <td style="text-align: center;"> 3.167 </td> <td style="text-align: center;"> 3.058 </td> <td style="text-align: center;"> 3.767 </td> <td style="text-align: center;"> 3.083 </td> </tr> <tr> <td style="text-align: center; font-weight: bold;"> ReAct </td> <td style="text-align: center;"> 3.415 </td> <td style="text-align: center;"> 3.483 </td> <td style="text-align: center;"> 3.483 </td> <td style="text-align: center;"> 3.535 </td> <td style="text-align: center;"> 3.800 </td> <td style="text-align: center;"> 3.800 </td> </tr> <tr> <td style="text-align: center; font-weight: bold;"> Reflexion </td> <td style="text-align: center;"> 3.658 </td> <td style="text-align: center;"> 3.917 </td> <td style="text-align: center;"> 3.533 </td> <td style="text-align: center;"> 3.888 </td> <td style="text-align: center;"> 3.967 </td> <td style="text-align: center;"> 3.917 </td> </tr> <tr> <td style="text-align: center; font-weight: bold;"> CogGPT </td> <td style="text-align: center; font-weight: bold;"> 4.118 </td> <td style="text-align: center; font-weight: bold;"> 4.117 </td> <td style="text-align: center; font-weight: bold;"> 4.300 </td> <td style="text-align: center; font-weight: bold;"> 4.145 </td> <td style="text-align: center; font-weight: bold;"> 4.183 </td> <td style="text-align: center; font-weight: bold;"> 4.317 </td> </tr> </table> ## Data Format --- CogBench supports languages in both English and Chinese, which are stored in the `english\` and `chinese\` folders separately. **profile.json** records the 20 profiles generated for the task in our [paper](https://arxiv.org/abs/2401.08438). Each piece of data is a dictionary with key-value pairs representing the character's portrait. The overall data format is as follows: ```json { "Name": "", "Gender": "", "Age": "", "Place of Birth": "", "Occupation": "", "Height": "", "Weight": "", "Distinguishing Marks": "", "Personality": "", "Hobbies": "", "Skills": "", "Dislikes": "", "Values": "", "Religious Beliefs": "", "Interpersonal Relations": "", "Flaws": "", "External Environment": "", "Financial Status": "", "Family Background": "", "Educational Background": "", "Significant Experience": "", "Future Outlook": "" } ``` **cogbench_a.json** and **cogbench_v.json** record the overall cognitive tests across 50 topics. Each piece of data is a dictionary with the following keys: - `iteration`: an integer, indicating the number of the current iteration. - `category`: a string, indicating the category of the information flow and questionnaire. - `topic`: a string, indicating the topic of the information flow and questionnaire. - `information_flow`: a list, indicating the information flows of the current iteration. - `questionnaire`: a list, indicating the topic-related questions. - `question`: a string, indicating a specific question. 
The overall data format is as follows:

```json
{
  "iteration": 0,
  "category": "",
  "topic": "",
  "information_flow": [],
  "questionnaire": [
    {
      "question": ""
    },
    ...
  ]
}
```

**eval_cogbench_a.json** and **eval_cogbench_v.json** record the annotation results in our [paper](https://arxiv.org/abs/2401.08438). Similar to the data format of **cogbench_a.json** and **cogbench_v.json**, we include additional key-value pairs to record the experimental results. Each piece of data is a dictionary extended with the following keys:

- `profile`: a dictionary, indicating the profile chosen from `profile.json` for role-playing in the task.
- `answer`: a dictionary, indicating the experimental results of annotators and different agents.
- `human_rating`: an integer, indicating a score provided by an annotator to showcase their attitude towards the question based on the profile and previous information flows. The attitude employs a five-point scale, ranging from `strongly disagree` to `strongly agree`, with a `neutral` midpoint.
- `CoT`: a string, indicating the name of the agent, which is extended to `ReAct`, `Reflexion`, and `CogGPT` in our experiments.
- `rating`: an integer, indicating a score provided by the agent to showcase its attitude towards the question based on the profile and previous information flows. The attitude employs an identical five-point scale.
- `reason`: a string, indicating a reason provided by the agent to explain the reasoning for its rating.
- `rationality`: an integer, indicating a rationality score provided by an annotator for the reason.

The overall data format is as follows:

```json
{
  "iteration": 0,
  "category": "",
  "topic": "",
  "information_flow": [],
  "profile": {},
  "questionnaire": [
    {
      "question": "",
      "answer": {
        "human_rating": 2,
        "CoT": {
          "rating": 2,
          "reason": "",
          "rationality": 1
        },
        ...
      }
    },
    ...
  ]
}
```

## Evaluation

To obtain the overall evaluation scores of `CoT`, including Authenticity and Rationality, execute the following command, using the experimental results of `CoT` in CogBench<sub>v</sub> as an example.

```bash
python evaluation.py --file_path english/eval_cogbench_v.json --method CoT --authenticity --rationality
```

Here is the explanation of the parameters:

- `--file_path`: The file path of the annotation results. You should follow the data format of **eval_cogbench_a.json** or **eval_cogbench_v.json** to run the script correctly.
- `--method`: The name of the agent for evaluation.
- `--authenticity`: Whether to calculate the Authenticity metric.
- `--rationality`: Whether to calculate the Rationality metric.

The final evaluation scores will appear as follows:

```bash
======= CoT Authenticity =======
Average authenticity: 0.15277666156947955
5th iteration authenticity: 0.3023255813953488
10th iteration authenticity: 0.13135593220338992
======= CoT Rationality =======
Average rationality: 3.058333333333333
5th iteration rationality: 3.7666666666666666
10th iteration rationality: 3.0833333333333335
```
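For reference, a minimal sketch of aggregating the Rationality metric for one agent directly from an annotation file, following the key structure described above; it assumes the file holds a JSON list of the per-iteration dictionaries, and the official numbers should come from `evaluation.py`.

```python
# A minimal sketch, assuming eval_cogbench_v.json is a JSON list of the
# per-iteration dictionaries described above; use evaluation.py for the
# official Authenticity and Rationality numbers.
import json

method = "CoT"
with open("english/eval_cogbench_v.json", encoding="utf-8") as f:
    data = json.load(f)

# Collect the annotator-assigned rationality score of every answered question.
scores = [
    q["answer"][method]["rationality"]
    for item in data
    for q in item["questionnaire"]
    if method in q["answer"]
]
print(f"Average rationality for {method}: {sum(scores) / len(scores):.3f}")
```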
kwaikeg/CogBench
[ "task_categories:text-generation", "size_categories:1K<n<10K", "language:zh", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:2401.08438", "region:us" ]
2023-12-28T03:52:55+00:00
{"language": ["zh", "en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"]}
2024-01-26T03:05:28+00:00
[ "2401.08438" ]
[ "zh", "en" ]
TAGS #task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2401.08438 #region-us
CogBench is the benchmark introduced in CogGPT (GitHub), a series of agent-related works open-sourced by KwaiKEG from Kuaishou Technology. It consists of 22,000 pieces of bilingual data designed to evaluate the cognitive dynamics of LLMs. CogBench is divided into two parts based on the type of information flow: CogBencha for articles and CogBenchv for short videos. The evaluation metrics, including Authenticity and Rationality, assess the ratings and reasoning of an agent, respectively.


Overall statistics of CogBench
------------------------------


---


Evaluation results of different agents in CogBench
--------------------------------------------------


---


The specific performance of different agents in CogBench is detailed in our paper.


* Performance of different agents in CogBench with the Authenticity metric.


* Performance of different agents in CogBench with the Rationality metric.


Data Format
-----------


---


CogBench supports languages in both English and Chinese, which are stored in the 'english' and 'chinese' folders separately.


URL records the 20 profiles generated for the task in our paper. Each piece of data is a dictionary with key-value pairs representing the character's portrait. The overall data format is as follows:


cogbench\_a.json and cogbench\_v.json record the overall cognitive tests across 50 topics. Each piece of data is a dictionary with the following keys:


* 'iteration': an integer, indicating the number of the current iteration.
* 'category': a string, indicating the category of the information flow and questionnaire.
* 'topic': a string, indicating the topic of the information flow and questionnaire.
* 'information\_flow': a list, indicating the information flows of the current iteration.
* 'questionnaire': a list, indicating the topic-related questions.
* 'question': a string, indicating a specific question.


The overall data format is as follows:


eval\_cogbench\_a.json and eval\_cogbench\_v.json record the annotation results in our paper. Similar to the data format of cogbench\_a.json and cogbench\_v.json, we include additional key-value pairs to record the experimental results. Each piece of data is a dictionary extended with the following keys:


* 'profile': a dictionary, indicating the profile chosen from 'URL' for role-playing in the task.
* 'answer': a dictionary, indicating the experimental results of annotators and different agents.
* 'human\_rating': an integer, indicating a score provided by an annotator to showcase their attitude towards the question based on the profile and previous information flows. The attitude employs a five-point scale, ranging from 'strongly disagree' to 'strongly agree', with a 'neutral' midpoint.
* 'CoT': a string, indicating the name of the agent, which is extended to 'ReAct', 'Reflexion', 'CogGPT' in our experiments.
* 'rating': an integer, indicating a score provided by the agent to showcase its attitude towards the question based on the profile and previous information flows. The attitude employs an identical five-point scale.
* 'reason': a string, indicating a reason provided by the agent to explain the reasoning for its rating.
* 'rationality': an integer, indicating a rationality score provided by an annotator to the reason.


The overall data format is as follows:


Evaluation
----------


To obtain the overall evaluation scores of 'CoT', including Authenticity and Rationality, execute the following command using the experimental results of 'CoT' in CogBenchv as an example.
Here is the explanation of the parameters: * '--file\_path': The file path of the annotation results. You should follow the data format of eval\_cogbench\_a.json or eval\_cogbench\_v.json to run the script correctly. * '--method': The name of the agent for evaluation. * '--authenticity': Whether to calculate the Authenticity metric. * '--rationality': Whether to calculate the Rationality metric. The final evaluation scores will appear as follows:
[]
[ "TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2401.08438 #region-us \n" ]
[ 60 ]
[ "passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2401.08438 #region-us \n" ]
d4c9b443ed6e8563bfb1126c4e8dc8a3be9327a6
# Dataset Card for "Subject_Driven_Image_Editing" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TIGER-Lab/Subject_Driven_Image_Editing
[ "region:us" ]
2023-12-28T04:06:44+00:00
{"dataset_info": {"features": [{"name": "uid", "dtype": "int64"}, {"name": "image", "dtype": "image"}, {"name": "subject", "dtype": "string"}, {"name": "subject_image_0", "dtype": "image"}, {"name": "subject_image_1", "dtype": "image"}, {"name": "subject_image_2", "dtype": "image"}], "splits": [{"name": "eval", "num_bytes": 154799894.0, "num_examples": 154}, {"name": "extra", "num_bytes": 66230300.0, "num_examples": 66}], "download_size": 49158277, "dataset_size": 221030194.0}}
2023-12-28T04:19:39+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Subject_Driven_Image_Editing" More Information needed
[ "# Dataset Card for \"Subject_Driven_Image_Editing\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Subject_Driven_Image_Editing\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Subject_Driven_Image_Editing\"\n\nMore Information needed" ]
fb9c3c239c5d61bec25157d3bbdcec16fddf04f0
# Dataset Card for "Subject_Driven_Image_Generation" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
TIGER-Lab/Subject_Driven_Image_Generation
[ "region:us" ]
2023-12-28T04:07:17+00:00
{"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "subject", "dtype": "string"}, {"name": "uid", "dtype": "int64"}, {"name": "subject_id", "dtype": "int64"}, {"name": "subject_image_0", "dtype": "image"}, {"name": "subject_image_1", "dtype": "image"}, {"name": "subject_image_2", "dtype": "image"}], "splits": [{"name": "full", "num_bytes": 210463659.0, "num_examples": 215}, {"name": "eval", "num_bytes": 144522527.0, "num_examples": 150}], "download_size": 57827973, "dataset_size": 354986186.0}}
2023-12-28T04:08:30+00:00
[]
[]
TAGS #region-us
# Dataset Card for "Subject_Driven_Image_Generation" More Information needed
[ "# Dataset Card for \"Subject_Driven_Image_Generation\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"Subject_Driven_Image_Generation\"\n\nMore Information needed" ]
[ 6, 20 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"Subject_Driven_Image_Generation\"\n\nMore Information needed" ]
ae81ca424d6a776b20cd471278478ba661de0d23
# vogue-runway-top15-512px

[Vogue Runway](https://www.vogue.com/fashion-shows)
- 15 fashion houses
- 1679 collections
- 87,547 images

Fashion Houses: Alexander McQueen, Armani, Balenciaga, Calvin Klein, Chanel, Dior, Fendi, Gucci, Hermes, Louis Vuitton, Prada, Ralph Lauren, Saint Laurent, Valentino, Versace.

Images have a maximum height of 512 pixels.

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/kUFTy7kt_WAVbFymF-3uH.jpeg)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/HBTN1FxNwsLJipqldcTgb.jpeg)

![image/jpeg](https://cdn-uploads.huggingface.co/production/uploads/648a824a8ca6cf9857d1349c/Sg--x22QxiePHi1DY35gP.jpeg)
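For quick use, a minimal sketch of loading the dataset and decoding a label; it assumes the default `train` split, while the `image` and `label` features match the dataset info below.

```python
# A minimal sketch, assuming the default "train" split; the "image" and
# "label" feature names come from the dataset info declared for this repo.
from datasets import load_dataset

ds = load_dataset("tonyassi/vogue-runway-top15-512px", split="train")
example = ds[0]

# Decode the integer class id into its "house,season collection" string,
# e.g. "chanel,fall 2004 ready to wear".
print(ds.features["label"].int2str(example["label"]))

img = example["image"]  # PIL image, max height 512 px
```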
tonyassi/vogue-runway-top15-512px
[ "region:us" ]
2023-12-28T05:46:29+00:00
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "alexander mcqueen,fall 1996 ready to wear", "1": "alexander mcqueen,fall 1997 ready to wear", "2": "alexander mcqueen,fall 1998 ready to wear", "3": "alexander mcqueen,fall 1999 ready to wear", "4": "alexander mcqueen,fall 2000 ready to wear", "5": "alexander mcqueen,fall 2001 ready to wear", "6": "alexander mcqueen,fall 2002 ready to wear", "7": "alexander mcqueen,fall 2003 ready to wear", "8": "alexander mcqueen,fall 2004 ready to wear", "9": "alexander mcqueen,fall 2005 menswear", "10": "alexander mcqueen,fall 2005 ready to wear", "11": "alexander mcqueen,fall 2006 menswear", "12": "alexander mcqueen,fall 2006 ready to wear", "13": "alexander mcqueen,fall 2007 menswear", "14": "alexander mcqueen,fall 2007 ready to wear", "15": "alexander mcqueen,fall 2008 menswear", "16": "alexander mcqueen,fall 2008 ready to wear", "17": "alexander mcqueen,fall 2009 ready to wear", "18": "alexander mcqueen,fall 2010 menswear", "19": "alexander mcqueen,fall 2010 ready to wear", "20": "alexander mcqueen,fall 2011 menswear", "21": "alexander mcqueen,fall 2011 ready to wear", "22": "alexander mcqueen,fall 2012 menswear", "23": "alexander mcqueen,fall 2012 ready to wear", "24": "alexander mcqueen,fall 2013 menswear", "25": "alexander mcqueen,fall 2013 ready to wear", "26": "alexander mcqueen,fall 2014 menswear", "27": "alexander mcqueen,fall 2014 ready to wear", "28": "alexander mcqueen,fall 2015 menswear", "29": "alexander mcqueen,fall 2015 ready to wear", "30": "alexander mcqueen,fall 2016 menswear", "31": "alexander mcqueen,fall 2016 ready to wear", "32": "alexander mcqueen,fall 2017 menswear", "33": "alexander mcqueen,fall 2017 ready to wear", "34": "alexander mcqueen,fall 2018 menswear", "35": "alexander mcqueen,fall 2018 ready to wear", "36": "alexander mcqueen,fall 2019 menswear", "37": "alexander mcqueen,fall 2019 ready to wear", "38": "alexander mcqueen,fall 2020 menswear", "39": "alexander mcqueen,fall 2020 ready to wear", "40": "alexander mcqueen,fall 2021 menswear", "41": "alexander mcqueen,fall 2021 ready to wear", "42": "alexander mcqueen,fall 2022 menswear", "43": "alexander mcqueen,fall 2022 ready to wear", "44": "alexander mcqueen,fall 2023 menswear", "45": "alexander mcqueen,fall 2023 ready to wear", "46": "alexander mcqueen,pre fall 2009", "47": "alexander mcqueen,pre fall 2011", "48": "alexander mcqueen,pre fall 2012", "49": "alexander mcqueen,pre fall 2013", "50": "alexander mcqueen,pre fall 2014", "51": "alexander mcqueen,pre fall 2015", "52": "alexander mcqueen,pre fall 2016", "53": "alexander mcqueen,pre fall 2017", "54": "alexander mcqueen,pre fall 2018", "55": "alexander mcqueen,pre fall 2019", "56": "alexander mcqueen,pre fall 2020", "57": "alexander mcqueen,pre fall 2021", "58": "alexander mcqueen,pre fall 2021 menswear", "59": "alexander mcqueen,pre fall 2022", "60": "alexander mcqueen,pre fall 2023", "61": "alexander mcqueen,resort 2009", "62": "alexander mcqueen,resort 2010", "63": "alexander mcqueen,resort 2011", "64": "alexander mcqueen,resort 2012", "65": "alexander mcqueen,resort 2013", "66": "alexander mcqueen,resort 2014", "67": "alexander mcqueen,resort 2015", "68": "alexander mcqueen,resort 2016", "69": "alexander mcqueen,resort 2017", "70": "alexander mcqueen,resort 2018", "71": "alexander mcqueen,resort 2019", "72": "alexander mcqueen,resort 2020", "73": "alexander mcqueen,resort 2021", "74": "alexander mcqueen,resort 2022", "75": "alexander 
mcqueen,resort 2023", "76": "alexander mcqueen,spring 1995 ready to wear", "77": "alexander mcqueen,spring 1996 ready to wear", "78": "alexander mcqueen,spring 1997 ready to wear", "79": "alexander mcqueen,spring 1998 ready to wear", "80": "alexander mcqueen,spring 1999 ready to wear", "81": "alexander mcqueen,spring 2000 ready to wear", "82": "alexander mcqueen,spring 2001 ready to wear", "83": "alexander mcqueen,spring 2002 ready to wear", "84": "alexander mcqueen,spring 2003 ready to wear", "85": "alexander mcqueen,spring 2004 ready to wear", "86": "alexander mcqueen,spring 2005 menswear", "87": "alexander mcqueen,spring 2005 ready to wear", "88": "alexander mcqueen,spring 2006 menswear", "89": "alexander mcqueen,spring 2006 ready to wear", "90": "alexander mcqueen,spring 2007 menswear", "91": "alexander mcqueen,spring 2007 ready to wear", "92": "alexander mcqueen,spring 2008 menswear", "93": "alexander mcqueen,spring 2008 ready to wear", "94": "alexander mcqueen,spring 2009 menswear", "95": "alexander mcqueen,spring 2009 ready to wear", "96": "alexander mcqueen,spring 2010 menswear", "97": "alexander mcqueen,spring 2010 ready to wear", "98": "alexander mcqueen,spring 2011 menswear", "99": "alexander mcqueen,spring 2011 ready to wear", "100": "alexander mcqueen,spring 2012 menswear", "101": "alexander mcqueen,spring 2012 ready to wear", "102": "alexander mcqueen,spring 2013 menswear", "103": "alexander mcqueen,spring 2013 ready to wear", "104": "alexander mcqueen,spring 2014 menswear", "105": "alexander mcqueen,spring 2014 ready to wear", "106": "alexander mcqueen,spring 2015 menswear", "107": "alexander mcqueen,spring 2015 ready to wear", "108": "alexander mcqueen,spring 2016 menswear", "109": "alexander mcqueen,spring 2016 ready to wear", "110": "alexander mcqueen,spring 2017 menswear", "111": "alexander mcqueen,spring 2017 ready to wear", "112": "alexander mcqueen,spring 2018 menswear", "113": "alexander mcqueen,spring 2018 ready to wear", "114": "alexander mcqueen,spring 2019 menswear", "115": "alexander mcqueen,spring 2019 ready to wear", "116": "alexander mcqueen,spring 2020 menswear", "117": "alexander mcqueen,spring 2020 ready to wear", "118": "alexander mcqueen,spring 2021 menswear", "119": "alexander mcqueen,spring 2021 ready to wear", "120": "alexander mcqueen,spring 2022 menswear", "121": "alexander mcqueen,spring 2022 ready to wear", "122": "alexander mcqueen,spring 2023 menswear", "123": "alexander mcqueen,spring 2023 ready to wear", "124": "alexander mcqueen,spring 2024 menswear", "125": "alexander mcqueen,spring 2024 ready to wear", "126": "armani prive,fall 2005 couture", "127": "armani prive,fall 2006 couture", "128": "armani prive,fall 2007 couture", "129": "armani prive,fall 2008 couture", "130": "armani prive,fall 2009 couture", "131": "armani prive,fall 2010 couture", "132": "armani prive,fall 2011 couture", "133": "armani prive,fall 2012 couture", "134": "armani prive,fall 2013 couture", "135": "armani prive,fall 2014 couture", "136": "armani prive,fall 2015 couture", "137": "armani prive,fall 2016 couture", "138": "armani prive,fall 2017 couture", "139": "armani prive,fall 2018 couture", "140": "armani prive,fall 2019 couture", "141": "armani prive,fall 2021 couture", "142": "armani prive,fall 2022 couture", "143": "armani prive,fall 2023 couture", "144": "armani prive,spring 2005 couture", "145": "armani prive,spring 2006 couture", "146": "armani prive,spring 2007 couture", "147": "armani prive,spring 2008 couture", "148": "armani prive,spring 2009 couture", 
"149": "armani prive,spring 2010 couture", "150": "armani prive,spring 2011 couture", "151": "armani prive,spring 2012 couture", "152": "armani prive,spring 2013 couture", "153": "armani prive,spring 2014 couture", "154": "armani prive,spring 2015 couture", "155": "armani prive,spring 2016 couture", "156": "armani prive,spring 2017 couture", "157": "armani prive,spring 2018 couture", "158": "armani prive,spring 2019 couture", "159": "armani prive,spring 2020 couture", "160": "armani prive,spring 2021 couture", "161": "armani prive,spring 2023 couture", "162": "balenciaga,fall 2000 ready to wear", "163": "balenciaga,fall 2001 ready to wear", "164": "balenciaga,fall 2002 ready to wear", "165": "balenciaga,fall 2003 ready to wear", "166": "balenciaga,fall 2004 ready to wear", "167": "balenciaga,fall 2005 ready to wear", "168": "balenciaga,fall 2006 ready to wear", "169": "balenciaga,fall 2007 menswear", "170": "balenciaga,fall 2007 ready to wear", "171": "balenciaga,fall 2008 ready to wear", "172": "balenciaga,fall 2009 ready to wear", "173": "balenciaga,fall 2010 ready to wear", "174": "balenciaga,fall 2011 menswear", "175": "balenciaga,fall 2011 ready to wear", "176": "balenciaga,fall 2012 menswear", "177": "balenciaga,fall 2012 ready to wear", "178": "balenciaga,fall 2013 menswear", "179": "balenciaga,fall 2013 ready to wear", "180": "balenciaga,fall 2014 menswear", "181": "balenciaga,fall 2014 ready to wear", "182": "balenciaga,fall 2015 menswear", "183": "balenciaga,fall 2015 ready to wear", "184": "balenciaga,fall 2016 ready to wear", "185": "balenciaga,fall 2017 menswear", "186": "balenciaga,fall 2017 ready to wear", "187": "balenciaga,fall 2018 ready to wear", "188": "balenciaga,fall 2019 menswear", "189": "balenciaga,fall 2019 ready to wear", "190": "balenciaga,fall 2020 menswear", "191": "balenciaga,fall 2020 ready to wear", "192": "balenciaga,fall 2021 couture", "193": "balenciaga,fall 2021 menswear", "194": "balenciaga,fall 2021 ready to wear", "195": "balenciaga,fall 2022 couture", "196": "balenciaga,fall 2022 ready to wear", "197": "balenciaga,fall 2023 couture", "198": "balenciaga,fall 2023 ready to wear", "199": "balenciaga,pre fall 2008", "200": "balenciaga,pre fall 2009", "201": "balenciaga,pre fall 2010", "202": "balenciaga,pre fall 2011", "203": "balenciaga,pre fall 2012", "204": "balenciaga,pre fall 2013", "205": "balenciaga,pre fall 2014", "206": "balenciaga,pre fall 2015", "207": "balenciaga,pre fall 2016", "208": "balenciaga,pre fall 2017", "209": "balenciaga,pre fall 2018", "210": "balenciaga,pre fall 2019", "211": "balenciaga,pre fall 2020", "212": "balenciaga,pre fall 2021", "213": "balenciaga,pre fall 2022", "214": "balenciaga,pre fall 2023", "215": "balenciaga,pre fall 2024", "216": "balenciaga,resort 2008", "217": "balenciaga,resort 2009", "218": "balenciaga,resort 2010", "219": "balenciaga,resort 2011", "220": "balenciaga,resort 2012", "221": "balenciaga,resort 2013", "222": "balenciaga,resort 2014", "223": "balenciaga,resort 2015", "224": "balenciaga,resort 2016", "225": "balenciaga,resort 2017", "226": "balenciaga,resort 2018", "227": "balenciaga,resort 2019", "228": "balenciaga,resort 2020", "229": "balenciaga,resort 2021", "230": "balenciaga,resort 2022", "231": "balenciaga,resort 2023", "232": "balenciaga,resort 2024", "233": "balenciaga,spring 1998 ready to wear", "234": "balenciaga,spring 2000 ready to wear", "235": "balenciaga,spring 2001 ready to wear", "236": "balenciaga,spring 2002 ready to wear", "237": "balenciaga,spring 2003 ready to wear", "238": 
"balenciaga,spring 2004 ready to wear", "239": "balenciaga,spring 2005 ready to wear", "240": "balenciaga,spring 2006 ready to wear", "241": "balenciaga,spring 2007 menswear", "242": "balenciaga,spring 2007 ready to wear", "243": "balenciaga,spring 2008 menswear", "244": "balenciaga,spring 2008 ready to wear", "245": "balenciaga,spring 2009 ready to wear", "246": "balenciaga,spring 2010 ready to wear", "247": "balenciaga,spring 2011 menswear", "248": "balenciaga,spring 2011 ready to wear", "249": "balenciaga,spring 2012 menswear", "250": "balenciaga,spring 2012 ready to wear", "251": "balenciaga,spring 2013 menswear", "252": "balenciaga,spring 2013 ready to wear", "253": "balenciaga,spring 2014 menswear", "254": "balenciaga,spring 2014 ready to wear", "255": "balenciaga,spring 2015 menswear", "256": "balenciaga,spring 2015 ready to wear", "257": "balenciaga,spring 2016 menswear", "258": "balenciaga,spring 2016 ready to wear", "259": "balenciaga,spring 2017 menswear", "260": "balenciaga,spring 2017 ready to wear", "261": "balenciaga,spring 2018 menswear", "262": "balenciaga,spring 2018 ready to wear", "263": "balenciaga,spring 2019 ready to wear", "264": "balenciaga,spring 2020 menswear", "265": "balenciaga,spring 2020 ready to wear", "266": "balenciaga,spring 2021 menswear", "267": "balenciaga,spring 2021 ready to wear", "268": "balenciaga,spring 2022 ready to wear", "269": "balenciaga,spring 2023 ready to wear", "270": "balenciaga,spring 2024 ready to wear", "271": "calvin klein collection,fall 1995 ready to wear", "272": "calvin klein collection,fall 1996 ready to wear", "273": "calvin klein collection,fall 1997 ready to wear", "274": "calvin klein collection,fall 1998 ready to wear", "275": "calvin klein collection,fall 1999 ready to wear", "276": "calvin klein collection,fall 2000 ready to wear", "277": "calvin klein collection,fall 2001 ready to wear", "278": "calvin klein collection,fall 2002 ready to wear", "279": "calvin klein collection,fall 2003 ready to wear", "280": "calvin klein collection,fall 2004 ready to wear", "281": "calvin klein collection,fall 2005 menswear", "282": "calvin klein collection,fall 2005 ready to wear", "283": "calvin klein collection,fall 2006 menswear", "284": "calvin klein collection,fall 2006 ready to wear", "285": "calvin klein collection,fall 2007 menswear", "286": "calvin klein collection,fall 2007 ready to wear", "287": "calvin klein collection,fall 2008 menswear", "288": "calvin klein collection,fall 2008 ready to wear", "289": "calvin klein collection,fall 2009 ready to wear", "290": "calvin klein collection,fall 2010 menswear", "291": "calvin klein collection,fall 2010 ready to wear", "292": "calvin klein collection,fall 2011 menswear", "293": "calvin klein collection,fall 2011 ready to wear", "294": "calvin klein collection,fall 2012 menswear", "295": "calvin klein collection,fall 2012 ready to wear", "296": "calvin klein collection,fall 2013 menswear", "297": "calvin klein collection,fall 2013 ready to wear", "298": "calvin klein collection,fall 2014 menswear", "299": "calvin klein collection,fall 2014 ready to wear", "300": "calvin klein collection,fall 2015 menswear", "301": "calvin klein collection,fall 2015 ready to wear", "302": "calvin klein collection,fall 2016 menswear", "303": "calvin klein collection,fall 2016 ready to wear", "304": "calvin klein collection,pre fall 2008", "305": "calvin klein collection,pre fall 2009", "306": "calvin klein collection,pre fall 2010", "307": "calvin klein collection,pre fall 2011", "308": "calvin 
klein collection,pre fall 2012", "309": "calvin klein collection,pre fall 2013", "310": "calvin klein collection,pre fall 2014", "311": "calvin klein collection,pre fall 2015", "312": "calvin klein collection,pre fall 2016", "313": "calvin klein collection,resort 2008", "314": "calvin klein collection,resort 2009", "315": "calvin klein collection,resort 2010", "316": "calvin klein collection,resort 2011", "317": "calvin klein collection,resort 2012", "318": "calvin klein collection,resort 2013", "319": "calvin klein collection,resort 2014", "320": "calvin klein collection,resort 2015", "321": "calvin klein collection,resort 2016", "322": "calvin klein collection,resort 2017", "323": "calvin klein collection,spring 1994 ready to wear", "324": "calvin klein collection,spring 1995 ready to wear", "325": "calvin klein collection,spring 1996 ready to wear", "326": "calvin klein collection,spring 1997 ready to wear", "327": "calvin klein collection,spring 1998 ready to wear", "328": "calvin klein collection,spring 1999 ready to wear", "329": "calvin klein collection,spring 2000 ready to wear", "330": "calvin klein collection,spring 2001 ready to wear", "331": "calvin klein collection,spring 2002 ready to wear", "332": "calvin klein collection,spring 2003 ready to wear", "333": "calvin klein collection,spring 2004 ready to wear", "334": "calvin klein collection,spring 2005 menswear", "335": "calvin klein collection,spring 2005 ready to wear", "336": "calvin klein collection,spring 2006 menswear", "337": "calvin klein collection,spring 2006 ready to wear", "338": "calvin klein collection,spring 2007 menswear", "339": "calvin klein collection,spring 2007 ready to wear", "340": "calvin klein collection,spring 2008 menswear", "341": "calvin klein collection,spring 2008 ready to wear", "342": "calvin klein collection,spring 2009 menswear", "343": "calvin klein collection,spring 2009 ready to wear", "344": "calvin klein collection,spring 2010 menswear", "345": "calvin klein collection,spring 2010 ready to wear", "346": "calvin klein collection,spring 2011 menswear", "347": "calvin klein collection,spring 2011 ready to wear", "348": "calvin klein collection,spring 2012 menswear", "349": "calvin klein collection,spring 2012 ready to wear", "350": "calvin klein collection,spring 2013 menswear", "351": "calvin klein collection,spring 2013 ready to wear", "352": "calvin klein collection,spring 2014 menswear", "353": "calvin klein collection,spring 2014 ready to wear", "354": "calvin klein collection,spring 2015 menswear", "355": "calvin klein collection,spring 2015 ready to wear", "356": "calvin klein collection,spring 2016 menswear", "357": "calvin klein collection,spring 2016 ready to wear", "358": "calvin klein collection,spring 2017 menswear", "359": "calvin klein,fall 2017 menswear", "360": "calvin klein,fall 2017 ready to wear", "361": "calvin klein,fall 2018 menswear", "362": "calvin klein,fall 2018 ready to wear", "363": "calvin klein,pre fall 2019", "364": "calvin klein,resort 2019", "365": "calvin klein,spring 2018 menswear", "366": "calvin klein,spring 2018 ready to wear", "367": "calvin klein,spring 2019 menswear", "368": "calvin klein,spring 2019 ready to wear", "369": "chanel,fall 1991 ready to wear", "370": "chanel,fall 1994 ready to wear", "371": "chanel,fall 1995 couture", "372": "chanel,fall 1996 couture", "373": "chanel,fall 1997 couture", "374": "chanel,fall 1999 couture", "375": "chanel,fall 2000 couture", "376": "chanel,fall 2000 ready to wear", "377": "chanel,fall 2002 couture", 
"378": "chanel,fall 2003 ready to wear", "379": "chanel,fall 2004 couture", "380": "chanel,fall 2004 ready to wear", "381": "chanel,fall 2005 couture", "382": "chanel,fall 2005 ready to wear", "383": "chanel,fall 2006 couture", "384": "chanel,fall 2006 ready to wear", "385": "chanel,fall 2007 couture", "386": "chanel,fall 2007 ready to wear", "387": "chanel,fall 2008 couture", "388": "chanel,fall 2008 ready to wear", "389": "chanel,fall 2009 couture", "390": "chanel,fall 2009 ready to wear", "391": "chanel,fall 2010 couture", "392": "chanel,fall 2010 ready to wear", "393": "chanel,fall 2011 couture", "394": "chanel,fall 2011 ready to wear", "395": "chanel,fall 2012 couture", "396": "chanel,fall 2012 ready to wear", "397": "chanel,fall 2013 couture", "398": "chanel,fall 2013 ready to wear", "399": "chanel,fall 2014 couture", "400": "chanel,fall 2014 ready to wear", "401": "chanel,fall 2015 couture", "402": "chanel,fall 2015 ready to wear", "403": "chanel,fall 2016 couture", "404": "chanel,fall 2016 ready to wear", "405": "chanel,fall 2017 couture", "406": "chanel,fall 2017 ready to wear", "407": "chanel,fall 2018 couture", "408": "chanel,fall 2018 ready to wear", "409": "chanel,fall 2019 couture", "410": "chanel,fall 2019 ready to wear", "411": "chanel,fall 2020 couture", "412": "chanel,fall 2020 ready to wear", "413": "chanel,fall 2021 couture", "414": "chanel,fall 2021 ready to wear", "415": "chanel,fall 2022 couture", "416": "chanel,fall 2022 ready to wear", "417": "chanel,fall 2023 couture", "418": "chanel,fall 2023 ready to wear", "419": "chanel,pre fall 2008", "420": "chanel,pre fall 2009", "421": "chanel,pre fall 2010", "422": "chanel,pre fall 2011", "423": "chanel,pre fall 2012", "424": "chanel,pre fall 2013", "425": "chanel,pre fall 2014", "426": "chanel,pre fall 2015", "427": "chanel,pre fall 2016", "428": "chanel,pre fall 2017", "429": "chanel,pre fall 2018", "430": "chanel,pre fall 2019", "431": "chanel,pre fall 2020", "432": "chanel,pre fall 2021", "433": "chanel,pre fall 2022", "434": "chanel,pre fall 2023", "435": "chanel,pre fall 2024", "436": "chanel,resort 2007", "437": "chanel,resort 2008", "438": "chanel,resort 2009", "439": "chanel,resort 2010", "440": "chanel,resort 2011", "441": "chanel,resort 2012", "442": "chanel,resort 2013", "443": "chanel,resort 2014", "444": "chanel,resort 2015", "445": "chanel,resort 2016", "446": "chanel,resort 2017", "447": "chanel,resort 2018", "448": "chanel,resort 2019", "449": "chanel,resort 2020", "450": "chanel,resort 2021", "451": "chanel,resort 2022", "452": "chanel,resort 2023", "453": "chanel,resort 2024", "454": "chanel,spring 1992 ready to wear", "455": "chanel,spring 1993 couture", "456": "chanel,spring 1993 ready to wear", "457": "chanel,spring 1994 ready to wear", "458": "chanel,spring 1995 ready to wear", "459": "chanel,spring 1996 ready to wear", "460": "chanel,spring 1997 couture", "461": "chanel,spring 1999 couture", "462": "chanel,spring 2001 couture", "463": "chanel,spring 2002 couture", "464": "chanel,spring 2002 ready to wear", "465": "chanel,spring 2003 couture", "466": "chanel,spring 2004 couture", "467": "chanel,spring 2004 ready to wear", "468": "chanel,spring 2005 couture", "469": "chanel,spring 2005 ready to wear", "470": "chanel,spring 2006 couture", "471": "chanel,spring 2006 ready to wear", "472": "chanel,spring 2007 couture", "473": "chanel,spring 2007 ready to wear", "474": "chanel,spring 2008 couture", "475": "chanel,spring 2008 ready to wear", "476": "chanel,spring 2009 couture", "477": "chanel,spring 2009 
ready to wear", "478": "chanel,spring 2010 couture", "479": "chanel,spring 2010 ready to wear", "480": "chanel,spring 2011 couture", "481": "chanel,spring 2011 ready to wear", "482": "chanel,spring 2012 couture", "483": "chanel,spring 2012 ready to wear", "484": "chanel,spring 2013 couture", "485": "chanel,spring 2013 ready to wear", "486": "chanel,spring 2014 couture", "487": "chanel,spring 2014 ready to wear", "488": "chanel,spring 2015 couture", "489": "chanel,spring 2015 ready to wear", "490": "chanel,spring 2016 couture", "491": "chanel,spring 2016 ready to wear", "492": "chanel,spring 2017 couture", "493": "chanel,spring 2017 ready to wear", "494": "chanel,spring 2018 couture", "495": "chanel,spring 2018 ready to wear", "496": "chanel,spring 2019 couture", "497": "chanel,spring 2019 ready to wear", "498": "chanel,spring 2020 couture", "499": "chanel,spring 2020 ready to wear", "500": "chanel,spring 2021 couture", "501": "chanel,spring 2021 ready to wear", "502": "chanel,spring 2022 couture", "503": "chanel,spring 2022 ready to wear", "504": "chanel,spring 2023 couture", "505": "chanel,spring 2023 ready to wear", "506": "chanel,spring 2024 ready to wear", "507": "christian dior,fall 1999 couture", "508": "christian dior,fall 2000 couture", "509": "christian dior,fall 2000 ready to wear", "510": "christian dior,fall 2001 couture", "511": "christian dior,fall 2001 ready to wear", "512": "christian dior,fall 2002 couture", "513": "christian dior,fall 2002 ready to wear", "514": "christian dior,fall 2003 couture", "515": "christian dior,fall 2003 ready to wear", "516": "christian dior,fall 2004 couture", "517": "christian dior,fall 2004 ready to wear", "518": "christian dior,fall 2005 couture", "519": "christian dior,fall 2005 ready to wear", "520": "christian dior,fall 2006 couture", "521": "christian dior,fall 2006 ready to wear", "522": "christian dior,fall 2007 couture", "523": "christian dior,fall 2007 ready to wear", "524": "christian dior,fall 2008 couture", "525": "christian dior,fall 2008 ready to wear", "526": "christian dior,fall 2009 couture", "527": "christian dior,fall 2009 ready to wear", "528": "christian dior,fall 2010 couture", "529": "christian dior,fall 2010 menswear", "530": "christian dior,fall 2010 ready to wear", "531": "christian dior,fall 2011 couture", "532": "christian dior,fall 2011 ready to wear", "533": "christian dior,fall 2012 couture", "534": "christian dior,fall 2012 ready to wear", "535": "christian dior,fall 2013 couture", "536": "christian dior,fall 2013 ready to wear", "537": "christian dior,fall 2014 couture", "538": "christian dior,fall 2014 ready to wear", "539": "christian dior,fall 2015 couture", "540": "christian dior,fall 2015 ready to wear", "541": "christian dior,fall 2016 couture", "542": "christian dior,fall 2016 ready to wear", "543": "christian dior,fall 2017 couture", "544": "christian dior,fall 2017 ready to wear", "545": "christian dior,fall 2018 couture", "546": "christian dior,fall 2018 ready to wear", "547": "christian dior,fall 2019 couture", "548": "christian dior,fall 2019 ready to wear", "549": "christian dior,fall 2020 couture", "550": "christian dior,fall 2021 couture", "551": "christian dior,fall 2021 ready to wear", "552": "christian dior,fall 2022 couture", "553": "christian dior,fall 2022 ready to wear", "554": "christian dior,fall 2023 couture", "555": "christian dior,fall 2023 ready to wear", "556": "christian dior,pre fall 2009", "557": "christian dior,pre fall 2010", "558": "christian dior,pre fall 2011", "559": 
"christian dior,pre fall 2012", "560": "christian dior,pre fall 2013", "561": "christian dior,pre fall 2014", "562": "christian dior,pre fall 2015", "563": "christian dior,pre fall 2016", "564": "christian dior,pre fall 2017", "565": "christian dior,pre fall 2018", "566": "christian dior,pre fall 2019", "567": "christian dior,pre fall 2020", "568": "christian dior,pre fall 2021", "569": "christian dior,pre fall 2022", "570": "christian dior,pre fall 2023", "571": "christian dior,resort 2007", "572": "christian dior,resort 2008", "573": "christian dior,resort 2009", "574": "christian dior,resort 2010", "575": "christian dior,resort 2011", "576": "christian dior,resort 2012", "577": "christian dior,resort 2013", "578": "christian dior,resort 2014", "579": "christian dior,resort 2015", "580": "christian dior,resort 2016", "581": "christian dior,resort 2017", "582": "christian dior,resort 2018", "583": "christian dior,resort 2019", "584": "christian dior,resort 2020", "585": "christian dior,resort 2021", "586": "christian dior,resort 2022", "587": "christian dior,resort 2023", "588": "christian dior,resort 2024", "589": "christian dior,spring 1999 couture", "590": "christian dior,spring 2000 ready to wear", "591": "christian dior,spring 2001 couture", "592": "christian dior,spring 2001 ready to wear", "593": "christian dior,spring 2002 couture", "594": "christian dior,spring 2002 ready to wear", "595": "christian dior,spring 2003 couture", "596": "christian dior,spring 2003 ready to wear", "597": "christian dior,spring 2004 couture", "598": "christian dior,spring 2004 ready to wear", "599": "christian dior,spring 2005 couture", "600": "christian dior,spring 2005 ready to wear", "601": "christian dior,spring 2006 couture", "602": "christian dior,spring 2006 ready to wear", "603": "christian dior,spring 2007 couture", "604": "christian dior,spring 2007 ready to wear", "605": "christian dior,spring 2008 couture", "606": "christian dior,spring 2008 ready to wear", "607": "christian dior,spring 2009 couture", "608": "christian dior,spring 2009 ready to wear", "609": "christian dior,spring 2010 couture", "610": "christian dior,spring 2010 menswear", "611": "christian dior,spring 2010 ready to wear", "612": "christian dior,spring 2011 couture", "613": "christian dior,spring 2011 ready to wear", "614": "christian dior,spring 2012 couture", "615": "christian dior,spring 2012 ready to wear", "616": "christian dior,spring 2013 couture", "617": "christian dior,spring 2013 ready to wear", "618": "christian dior,spring 2014 couture", "619": "christian dior,spring 2014 ready to wear", "620": "christian dior,spring 2015 couture", "621": "christian dior,spring 2015 ready to wear", "622": "christian dior,spring 2016 couture", "623": "christian dior,spring 2016 ready to wear", "624": "christian dior,spring 2017 couture", "625": "christian dior,spring 2017 ready to wear", "626": "christian dior,spring 2018 couture", "627": "christian dior,spring 2018 ready to wear", "628": "christian dior,spring 2019 couture", "629": "christian dior,spring 2019 ready to wear", "630": "christian dior,spring 2020 couture", "631": "christian dior,spring 2020 ready to wear", "632": "christian dior,spring 2021 couture", "633": "christian dior,spring 2021 ready to wear", "634": "christian dior,spring 2022 couture", "635": "christian dior,spring 2022 ready to wear", "636": "christian dior,spring 2023 couture", "637": "christian dior,spring 2023 ready to wear", "638": "christian dior,spring 2024 ready to wear", "639": "fendi,fall 1999 
ready to wear", "640": "fendi,fall 2000 ready to wear", "641": "fendi,fall 2001 ready to wear", "642": "fendi,fall 2002 ready to wear", "643": "fendi,fall 2003 ready to wear", "644": "fendi,fall 2004 ready to wear", "645": "fendi,fall 2005 ready to wear", "646": "fendi,fall 2006 ready to wear", "647": "fendi,fall 2007 menswear", "648": "fendi,fall 2007 ready to wear", "649": "fendi,fall 2008 menswear", "650": "fendi,fall 2008 ready to wear", "651": "fendi,fall 2009 ready to wear", "652": "fendi,fall 2010 ready to wear", "653": "fendi,fall 2011 ready to wear", "654": "fendi,fall 2012 menswear", "655": "fendi,fall 2012 ready to wear", "656": "fendi,fall 2013 menswear", "657": "fendi,fall 2013 ready to wear", "658": "fendi,fall 2014 menswear", "659": "fendi,fall 2014 ready to wear", "660": "fendi,fall 2015 couture", "661": "fendi,fall 2015 menswear", "662": "fendi,fall 2015 ready to wear", "663": "fendi,fall 2016 couture", "664": "fendi,fall 2016 menswear", "665": "fendi,fall 2016 ready to wear", "666": "fendi,fall 2017 couture", "667": "fendi,fall 2017 menswear", "668": "fendi,fall 2017 ready to wear", "669": "fendi,fall 2018 couture", "670": "fendi,fall 2018 menswear", "671": "fendi,fall 2018 ready to wear", "672": "fendi,fall 2019 couture", "673": "fendi,fall 2019 menswear", "674": "fendi,fall 2019 ready to wear", "675": "fendi,fall 2020 menswear", "676": "fendi,fall 2020 ready to wear", "677": "fendi,fall 2021 couture", "678": "fendi,fall 2021 menswear", "679": "fendi,fall 2021 ready to wear", "680": "fendi,fall 2022 couture", "681": "fendi,fall 2022 menswear", "682": "fendi,fall 2022 ready to wear", "683": "fendi,fall 2023 couture", "684": "fendi,fall 2023 menswear", "685": "fendi,fall 2023 ready to wear", "686": "fendi,pre fall 2011", "687": "fendi,pre fall 2012", "688": "fendi,pre fall 2013", "689": "fendi,pre fall 2014", "690": "fendi,pre fall 2015", "691": "fendi,pre fall 2016", "692": "fendi,pre fall 2017", "693": "fendi,pre fall 2018", "694": "fendi,pre fall 2019", "695": "fendi,pre fall 2020", "696": "fendi,pre fall 2022", "697": "fendi,resort 2008", "698": "fendi,resort 2009", "699": "fendi,resort 2012", "700": "fendi,resort 2013", "701": "fendi,resort 2014", "702": "fendi,resort 2015", "703": "fendi,resort 2016", "704": "fendi,resort 2017", "705": "fendi,resort 2018", "706": "fendi,resort 2019", "707": "fendi,resort 2020", "708": "fendi,resort 2022", "709": "fendi,resort 2023", "710": "fendi,resort 2024", "711": "fendi,spring 1999 ready to wear", "712": "fendi,spring 2000 ready to wear", "713": "fendi,spring 2001 ready to wear", "714": "fendi,spring 2002 ready to wear", "715": "fendi,spring 2003 ready to wear", "716": "fendi,spring 2004 ready to wear", "717": "fendi,spring 2005 ready to wear", "718": "fendi,spring 2006 ready to wear", "719": "fendi,spring 2007 ready to wear", "720": "fendi,spring 2008 menswear", "721": "fendi,spring 2008 ready to wear", "722": "fendi,spring 2009 menswear", "723": "fendi,spring 2009 ready to wear", "724": "fendi,spring 2010 ready to wear", "725": "fendi,spring 2011 ready to wear", "726": "fendi,spring 2012 ready to wear", "727": "fendi,spring 2013 menswear", "728": "fendi,spring 2013 ready to wear", "729": "fendi,spring 2014 menswear", "730": "fendi,spring 2014 ready to wear", "731": "fendi,spring 2015 menswear", "732": "fendi,spring 2015 ready to wear", "733": "fendi,spring 2016 menswear", "734": "fendi,spring 2016 ready to wear", "735": "fendi,spring 2017 menswear", "736": "fendi,spring 2017 ready to wear", "737": "fendi,spring 2018 menswear", 
"738": "fendi,spring 2018 ready to wear", "739": "fendi,spring 2019 menswear", "740": "fendi,spring 2019 ready to wear", "741": "fendi,spring 2020 menswear", "742": "fendi,spring 2020 ready to wear", "743": "fendi,spring 2021 couture", "744": "fendi,spring 2021 menswear", "745": "fendi,spring 2021 ready to wear", "746": "fendi,spring 2022 couture", "747": "fendi,spring 2022 menswear", "748": "fendi,spring 2022 ready to wear", "749": "fendi,spring 2023 couture", "750": "fendi,spring 2023 menswear", "751": "fendi,spring 2023 ready to wear", "752": "fendi,spring 2024 menswear", "753": "fendi,spring 2024 ready to wear", "754": "gucci,fall 1995 ready to wear", "755": "gucci,fall 1996 ready to wear", "756": "gucci,fall 2000 ready to wear", "757": "gucci,fall 2001 ready to wear", "758": "gucci,fall 2002 ready to wear", "759": "gucci,fall 2003 ready to wear", "760": "gucci,fall 2004 ready to wear", "761": "gucci,fall 2005 menswear", "762": "gucci,fall 2005 ready to wear", "763": "gucci,fall 2006 menswear", "764": "gucci,fall 2006 ready to wear", "765": "gucci,fall 2007 menswear", "766": "gucci,fall 2007 ready to wear", "767": "gucci,fall 2008 menswear", "768": "gucci,fall 2008 ready to wear", "769": "gucci,fall 2009 ready to wear", "770": "gucci,fall 2010 menswear", "771": "gucci,fall 2010 ready to wear", "772": "gucci,fall 2011 menswear", "773": "gucci,fall 2011 ready to wear", "774": "gucci,fall 2012 menswear", "775": "gucci,fall 2012 ready to wear", "776": "gucci,fall 2013 menswear", "777": "gucci,fall 2013 ready to wear", "778": "gucci,fall 2014 menswear", "779": "gucci,fall 2014 ready to wear", "780": "gucci,fall 2015 menswear", "781": "gucci,fall 2015 ready to wear", "782": "gucci,fall 2016 menswear", "783": "gucci,fall 2016 ready to wear", "784": "gucci,fall 2017 menswear", "785": "gucci,fall 2017 ready to wear", "786": "gucci,fall 2018 menswear", "787": "gucci,fall 2018 ready to wear", "788": "gucci,fall 2019 menswear", "789": "gucci,fall 2019 ready to wear", "790": "gucci,fall 2020 menswear", "791": "gucci,fall 2020 ready to wear", "792": "gucci,fall 2022 ready to wear", "793": "gucci,fall 2023 menswear", "794": "gucci,fall 2023 ready to wear", "795": "gucci,pre fall 2011", "796": "gucci,pre fall 2012", "797": "gucci,pre fall 2013", "798": "gucci,pre fall 2014", "799": "gucci,pre fall 2015", "800": "gucci,pre fall 2016", "801": "gucci,pre fall 2017", "802": "gucci,pre fall 2018", "803": "gucci,pre fall 2019", "804": "gucci,pre fall 2020", "805": "gucci,pre fall 2020 menswear", "806": "gucci,pre fall 2021", "807": "gucci,pre fall 2021 menswear", "808": "gucci,pre fall 2022", "809": "gucci,resort 2007", "810": "gucci,resort 2008", "811": "gucci,resort 2009", "812": "gucci,resort 2010", "813": "gucci,resort 2011", "814": "gucci,resort 2012", "815": "gucci,resort 2013", "816": "gucci,resort 2014", "817": "gucci,resort 2015", "818": "gucci,resort 2016", "819": "gucci,resort 2017", "820": "gucci,resort 2018", "821": "gucci,resort 2019", "822": "gucci,resort 2020", "823": "gucci,resort 2021", "824": "gucci,resort 2023", "825": "gucci,resort 2024", "826": "gucci,spring 1999 ready to wear", "827": "gucci,spring 2000 ready to wear", "828": "gucci,spring 2001 ready to wear", "829": "gucci,spring 2002 ready to wear", "830": "gucci,spring 2003 ready to wear", "831": "gucci,spring 2004 ready to wear", "832": "gucci,spring 2005 menswear", "833": "gucci,spring 2005 ready to wear", "834": "gucci,spring 2006 menswear", "835": "gucci,spring 2006 ready to wear", "836": "gucci,spring 2007 menswear", "837": 
"gucci,spring 2007 ready to wear", "838": "gucci,spring 2008 menswear", "839": "gucci,spring 2008 ready to wear", "840": "gucci,spring 2009 menswear", "841": "gucci,spring 2009 ready to wear", "842": "gucci,spring 2010 menswear", "843": "gucci,spring 2010 ready to wear", "844": "gucci,spring 2011 menswear", "845": "gucci,spring 2011 ready to wear", "846": "gucci,spring 2012 menswear", "847": "gucci,spring 2012 ready to wear", "848": "gucci,spring 2013 menswear", "849": "gucci,spring 2013 ready to wear", "850": "gucci,spring 2014 menswear", "851": "gucci,spring 2014 ready to wear", "852": "gucci,spring 2015 menswear", "853": "gucci,spring 2015 ready to wear", "854": "gucci,spring 2016 menswear", "855": "gucci,spring 2016 ready to wear", "856": "gucci,spring 2017 menswear", "857": "gucci,spring 2017 ready to wear", "858": "gucci,spring 2018 menswear", "859": "gucci,spring 2018 ready to wear", "860": "gucci,spring 2019 ready to wear", "861": "gucci,spring 2020 menswear", "862": "gucci,spring 2020 ready to wear", "863": "gucci,spring 2021 menswear", "864": "gucci,spring 2021 ready to wear", "865": "gucci,spring 2022 ready to wear", "866": "gucci,spring 2023 ready to wear", "867": "gucci,spring 2024 menswear", "868": "gucci,spring 2024 ready to wear", "869": "hermes,fall 1999 ready to wear", "870": "hermes,fall 2000 ready to wear", "871": "hermes,fall 2001 ready to wear", "872": "hermes,fall 2004 ready to wear", "873": "hermes,fall 2005 menswear", "874": "hermes,fall 2005 ready to wear", "875": "hermes,fall 2006 menswear", "876": "hermes,fall 2006 ready to wear", "877": "hermes,fall 2007 menswear", "878": "hermes,fall 2007 ready to wear", "879": "hermes,fall 2008 menswear", "880": "hermes,fall 2008 ready to wear", "881": "hermes,fall 2009 ready to wear", "882": "hermes,fall 2010 menswear", "883": "hermes,fall 2010 ready to wear", "884": "hermes,fall 2011 menswear", "885": "hermes,fall 2011 ready to wear", "886": "hermes,fall 2012 menswear", "887": "hermes,fall 2012 ready to wear", "888": "hermes,fall 2013 menswear", "889": "hermes,fall 2013 ready to wear", "890": "hermes,fall 2014 menswear", "891": "hermes,fall 2014 ready to wear", "892": "hermes,fall 2015 menswear", "893": "hermes,fall 2015 ready to wear", "894": "hermes,fall 2016 menswear", "895": "hermes,fall 2016 ready to wear", "896": "hermes,fall 2017 menswear", "897": "hermes,fall 2017 ready to wear", "898": "hermes,fall 2018 menswear", "899": "hermes,fall 2018 ready to wear", "900": "hermes,fall 2019 menswear", "901": "hermes,fall 2019 ready to wear", "902": "hermes,fall 2020 menswear", "903": "hermes,fall 2020 ready to wear", "904": "hermes,fall 2021 menswear", "905": "hermes,fall 2021 ready to wear", "906": "hermes,fall 2022 menswear", "907": "hermes,fall 2022 ready to wear", "908": "hermes,fall 2023 menswear", "909": "hermes,fall 2023 ready to wear", "910": "hermes,pre fall 2017", "911": "hermes,pre fall 2018", "912": "hermes,pre fall 2019", "913": "hermes,resort 2017", "914": "hermes,resort 2018", "915": "hermes,resort 2019", "916": "hermes,spring 1999 ready to wear", "917": "hermes,spring 2000 ready to wear", "918": "hermes,spring 2001 ready to wear", "919": "hermes,spring 2002 ready to wear", "920": "hermes,spring 2006 menswear", "921": "hermes,spring 2006 ready to wear", "922": "hermes,spring 2007 menswear", "923": "hermes,spring 2007 ready to wear", "924": "hermes,spring 2008 menswear", "925": "hermes,spring 2008 ready to wear", "926": "hermes,spring 2009 menswear", "927": "hermes,spring 2010 menswear", "928": "hermes,spring 
2010 ready to wear", "929": "hermes,spring 2011 menswear", "930": "hermes,spring 2011 ready to wear", "931": "hermes,spring 2012 menswear", "932": "hermes,spring 2012 ready to wear", "933": "hermes,spring 2013 menswear", "934": "hermes,spring 2013 ready to wear", "935": "hermes,spring 2014 menswear", "936": "hermes,spring 2014 ready to wear", "937": "hermes,spring 2015 menswear", "938": "hermes,spring 2015 ready to wear", "939": "hermes,spring 2016 menswear", "940": "hermes,spring 2016 ready to wear", "941": "hermes,spring 2017 menswear", "942": "hermes,spring 2017 ready to wear", "943": "hermes,spring 2018 menswear", "944": "hermes,spring 2018 ready to wear", "945": "hermes,spring 2019 menswear", "946": "hermes,spring 2019 ready to wear", "947": "hermes,spring 2020 menswear", "948": "hermes,spring 2020 ready to wear", "949": "hermes,spring 2021 menswear", "950": "hermes,spring 2021 ready to wear", "951": "hermes,spring 2022 menswear", "952": "hermes,spring 2022 ready to wear", "953": "hermes,spring 2023 menswear", "954": "hermes,spring 2023 ready to wear", "955": "hermes,spring 2024 menswear", "956": "hermes,spring 2024 ready to wear", "957": "louis vuitton,fall 1998 ready to wear", "958": "louis vuitton,fall 2000 ready to wear", "959": "louis vuitton,fall 2001 ready to wear", "960": "louis vuitton,fall 2002 ready to wear", "961": "louis vuitton,fall 2003 ready to wear", "962": "louis vuitton,fall 2004 ready to wear", "963": "louis vuitton,fall 2005 menswear", "964": "louis vuitton,fall 2005 ready to wear", "965": "louis vuitton,fall 2006 menswear", "966": "louis vuitton,fall 2006 ready to wear", "967": "louis vuitton,fall 2007 menswear", "968": "louis vuitton,fall 2008 menswear", "969": "louis vuitton,fall 2008 ready to wear", "970": "louis vuitton,fall 2009 ready to wear", "971": "louis vuitton,fall 2010 menswear", "972": "louis vuitton,fall 2010 ready to wear", "973": "louis vuitton,fall 2011 menswear", "974": "louis vuitton,fall 2011 ready to wear", "975": "louis vuitton,fall 2012 menswear", "976": "louis vuitton,fall 2012 ready to wear", "977": "louis vuitton,fall 2013 menswear", "978": "louis vuitton,fall 2013 ready to wear", "979": "louis vuitton,fall 2014 menswear", "980": "louis vuitton,fall 2014 ready to wear", "981": "louis vuitton,fall 2015 menswear", "982": "louis vuitton,fall 2015 ready to wear", "983": "louis vuitton,fall 2016 menswear", "984": "louis vuitton,fall 2016 ready to wear", "985": "louis vuitton,fall 2017 menswear", "986": "louis vuitton,fall 2017 ready to wear", "987": "louis vuitton,fall 2018 menswear", "988": "louis vuitton,fall 2018 ready to wear", "989": "louis vuitton,fall 2019 menswear", "990": "louis vuitton,fall 2019 ready to wear", "991": "louis vuitton,fall 2020 menswear", "992": "louis vuitton,fall 2020 ready to wear", "993": "louis vuitton,fall 2021 menswear", "994": "louis vuitton,fall 2021 ready to wear", "995": "louis vuitton,fall 2022 menswear", "996": "louis vuitton,fall 2022 ready to wear", "997": "louis vuitton,fall 2023 menswear", "998": "louis vuitton,fall 2023 ready to wear", "999": "louis vuitton,pre fall 2008", "1000": "louis vuitton,pre fall 2009", "1001": "louis vuitton,pre fall 2010", "1002": "louis vuitton,pre fall 2011", "1003": "louis vuitton,pre fall 2012", "1004": "louis vuitton,pre fall 2013", "1005": "louis vuitton,pre fall 2014", "1006": "louis vuitton,pre fall 2015", "1007": "louis vuitton,pre fall 2016", "1008": "louis vuitton,pre fall 2017", "1009": "louis vuitton,pre fall 2018", "1010": "louis vuitton,pre fall 2019", 
"1011": "louis vuitton,pre fall 2020", "1012": "louis vuitton,pre fall 2020 menswear", "1013": "louis vuitton,pre fall 2021", "1014": "louis vuitton,pre fall 2021 menswear", "1015": "louis vuitton,pre fall 2022 menswear", "1016": "louis vuitton,pre fall 2023", "1017": "louis vuitton,pre fall 2023 menswear", "1018": "louis vuitton,pre fall 2024 menswear", "1019": "louis vuitton,resort 2008", "1020": "louis vuitton,resort 2009", "1021": "louis vuitton,resort 2010", "1022": "louis vuitton,resort 2011", "1023": "louis vuitton,resort 2012", "1024": "louis vuitton,resort 2013", "1025": "louis vuitton,resort 2014", "1026": "louis vuitton,resort 2015", "1027": "louis vuitton,resort 2016", "1028": "louis vuitton,resort 2017", "1029": "louis vuitton,resort 2018", "1030": "louis vuitton,resort 2019", "1031": "louis vuitton,resort 2020", "1032": "louis vuitton,resort 2021", "1033": "louis vuitton,resort 2021 menswear", "1034": "louis vuitton,resort 2022", "1035": "louis vuitton,resort 2022 menswear", "1036": "louis vuitton,resort 2023", "1037": "louis vuitton,resort 2023 menswear", "1038": "louis vuitton,resort 2024", "1039": "louis vuitton,resort 2024 menswear", "1040": "louis vuitton,spring 2000 ready to wear", "1041": "louis vuitton,spring 2001 ready to wear", "1042": "louis vuitton,spring 2002 ready to wear", "1043": "louis vuitton,spring 2003 ready to wear", "1044": "louis vuitton,spring 2004 ready to wear", "1045": "louis vuitton,spring 2005 menswear", "1046": "louis vuitton,spring 2005 ready to wear", "1047": "louis vuitton,spring 2006 menswear", "1048": "louis vuitton,spring 2006 ready to wear", "1049": "louis vuitton,spring 2007 menswear", "1050": "louis vuitton,spring 2007 ready to wear", "1051": "louis vuitton,spring 2008 menswear", "1052": "louis vuitton,spring 2008 ready to wear", "1053": "louis vuitton,spring 2009 menswear", "1054": "louis vuitton,spring 2009 ready to wear", "1055": "louis vuitton,spring 2010 menswear", "1056": "louis vuitton,spring 2010 ready to wear", "1057": "louis vuitton,spring 2011 menswear", "1058": "louis vuitton,spring 2011 ready to wear", "1059": "louis vuitton,spring 2012 menswear", "1060": "louis vuitton,spring 2012 ready to wear", "1061": "louis vuitton,spring 2013 menswear", "1062": "louis vuitton,spring 2013 ready to wear", "1063": "louis vuitton,spring 2014 menswear", "1064": "louis vuitton,spring 2014 ready to wear", "1065": "louis vuitton,spring 2015 menswear", "1066": "louis vuitton,spring 2015 ready to wear", "1067": "louis vuitton,spring 2016 menswear", "1068": "louis vuitton,spring 2016 ready to wear", "1069": "louis vuitton,spring 2017 menswear", "1070": "louis vuitton,spring 2017 ready to wear", "1071": "louis vuitton,spring 2018 menswear", "1072": "louis vuitton,spring 2018 ready to wear", "1073": "louis vuitton,spring 2019 menswear", "1074": "louis vuitton,spring 2019 ready to wear", "1075": "louis vuitton,spring 2020 menswear", "1076": "louis vuitton,spring 2020 ready to wear", "1077": "louis vuitton,spring 2021 menswear", "1078": "louis vuitton,spring 2021 ready to wear", "1079": "louis vuitton,spring 2022 menswear", "1080": "louis vuitton,spring 2023 menswear", "1081": "louis vuitton,spring 2023 ready to wear", "1082": "louis vuitton,spring 2024 menswear", "1083": "prada,fall 1996 ready to wear", "1084": "prada,fall 2000 ready to wear", "1085": "prada,fall 2001 ready to wear", "1086": "prada,fall 2002 ready to wear", "1087": "prada,fall 2003 ready to wear", "1088": "prada,fall 2004 ready to wear", "1089": "prada,fall 2005 menswear", "1090": 
"prada,fall 2005 ready to wear", "1091": "prada,fall 2006 menswear", "1092": "prada,fall 2006 ready to wear", "1093": "prada,fall 2007 menswear", "1094": "prada,fall 2007 ready to wear", "1095": "prada,fall 2008 menswear", "1096": "prada,fall 2008 ready to wear", "1097": "prada,fall 2009 menswear", "1098": "prada,fall 2009 ready to wear", "1099": "prada,fall 2010 menswear", "1100": "prada,fall 2010 ready to wear", "1101": "prada,fall 2011 menswear", "1102": "prada,fall 2011 ready to wear", "1103": "prada,fall 2012 menswear", "1104": "prada,fall 2012 ready to wear", "1105": "prada,fall 2013 menswear", "1106": "prada,fall 2013 ready to wear", "1107": "prada,fall 2014 menswear", "1108": "prada,fall 2014 ready to wear", "1109": "prada,fall 2015 menswear", "1110": "prada,fall 2015 ready to wear", "1111": "prada,fall 2016 menswear", "1112": "prada,fall 2016 ready to wear", "1113": "prada,fall 2017 menswear", "1114": "prada,fall 2017 ready to wear", "1115": "prada,fall 2018 menswear", "1116": "prada,fall 2018 ready to wear", "1117": "prada,fall 2019 menswear", "1118": "prada,fall 2019 ready to wear", "1119": "prada,fall 2020 menswear", "1120": "prada,fall 2020 ready to wear", "1121": "prada,fall 2021 menswear", "1122": "prada,fall 2021 ready to wear", "1123": "prada,fall 2022 menswear", "1124": "prada,fall 2022 ready to wear", "1125": "prada,fall 2023 menswear", "1126": "prada,fall 2023 ready to wear", "1127": "prada,pre fall 2009", "1128": "prada,pre fall 2010", "1129": "prada,resort 2008", "1130": "prada,resort 2009", "1131": "prada,resort 2010", "1132": "prada,resort 2011", "1133": "prada,resort 2012", "1134": "prada,resort 2013", "1135": "prada,resort 2018", "1136": "prada,resort 2019", "1137": "prada,resort 2020", "1138": "prada,spring 1992 ready to wear", "1139": "prada,spring 1993 ready to wear", "1140": "prada,spring 1994 ready to wear", "1141": "prada,spring 1995 ready to wear", "1142": "prada,spring 1996 ready to wear", "1143": "prada,spring 1997 ready to wear", "1144": "prada,spring 1998 ready to wear", "1145": "prada,spring 1999 ready to wear", "1146": "prada,spring 2000 ready to wear", "1147": "prada,spring 2001 ready to wear", "1148": "prada,spring 2002 ready to wear", "1149": "prada,spring 2003 ready to wear", "1150": "prada,spring 2004 ready to wear", "1151": "prada,spring 2005 menswear", "1152": "prada,spring 2005 ready to wear", "1153": "prada,spring 2006 menswear", "1154": "prada,spring 2006 ready to wear", "1155": "prada,spring 2007 menswear", "1156": "prada,spring 2007 ready to wear", "1157": "prada,spring 2008 menswear", "1158": "prada,spring 2008 ready to wear", "1159": "prada,spring 2009 menswear", "1160": "prada,spring 2009 ready to wear", "1161": "prada,spring 2010 ready to wear", "1162": "prada,spring 2011 menswear", "1163": "prada,spring 2011 ready to wear", "1164": "prada,spring 2012 menswear", "1165": "prada,spring 2012 ready to wear", "1166": "prada,spring 2013 menswear", "1167": "prada,spring 2013 ready to wear", "1168": "prada,spring 2014 menswear", "1169": "prada,spring 2014 ready to wear", "1170": "prada,spring 2015 menswear", "1171": "prada,spring 2015 ready to wear", "1172": "prada,spring 2016 menswear", "1173": "prada,spring 2016 ready to wear", "1174": "prada,spring 2017 menswear", "1175": "prada,spring 2017 ready to wear", "1176": "prada,spring 2018 menswear", "1177": "prada,spring 2018 ready to wear", "1178": "prada,spring 2019 menswear", "1179": "prada,spring 2019 ready to wear", "1180": "prada,spring 2020 menswear", "1181": "prada,spring 2020 ready to 
wear", "1182": "prada,spring 2021 menswear", "1183": "prada,spring 2021 ready to wear", "1184": "prada,spring 2022 menswear", "1185": "prada,spring 2022 ready to wear", "1186": "prada,spring 2023 menswear", "1187": "prada,spring 2023 ready to wear", "1188": "prada,spring 2024 menswear", "1189": "prada,spring 2024 ready to wear", "1190": "ralph lauren,fall 2000 ready to wear", "1191": "ralph lauren,fall 2001 ready to wear", "1192": "ralph lauren,fall 2002 ready to wear", "1193": "ralph lauren,fall 2003 ready to wear", "1194": "ralph lauren,fall 2004 ready to wear", "1195": "ralph lauren,fall 2005 menswear", "1196": "ralph lauren,fall 2005 ready to wear", "1197": "ralph lauren,fall 2006 menswear", "1198": "ralph lauren,fall 2006 ready to wear", "1199": "ralph lauren,fall 2007 menswear", "1200": "ralph lauren,fall 2007 ready to wear", "1201": "ralph lauren,fall 2008 menswear", "1202": "ralph lauren,fall 2008 ready to wear", "1203": "ralph lauren,fall 2009 ready to wear", "1204": "ralph lauren,fall 2010 menswear", "1205": "ralph lauren,fall 2010 ready to wear", "1206": "ralph lauren,fall 2011 ready to wear", "1207": "ralph lauren,fall 2012 ready to wear", "1208": "ralph lauren,fall 2013 menswear", "1209": "ralph lauren,fall 2013 ready to wear", "1210": "ralph lauren,fall 2014 menswear", "1211": "ralph lauren,fall 2014 ready to wear", "1212": "ralph lauren,fall 2015 menswear", "1213": "ralph lauren,fall 2015 ready to wear", "1214": "ralph lauren,fall 2016 menswear", "1215": "ralph lauren,fall 2016 ready to wear", "1216": "ralph lauren,fall 2017 menswear", "1217": "ralph lauren,fall 2017 ready to wear", "1218": "ralph lauren,fall 2018 menswear", "1219": "ralph lauren,fall 2018 ready to wear", "1220": "ralph lauren,fall 2019 menswear", "1221": "ralph lauren,fall 2019 ready to wear", "1222": "ralph lauren,fall 2020 menswear", "1223": "ralph lauren,fall 2020 ready to wear", "1224": "ralph lauren,fall 2021 ready to wear", "1225": "ralph lauren,fall 2022 ready to wear", "1226": "ralph lauren,fall 2023 ready to wear", "1227": "ralph lauren,pre fall 2014", "1228": "ralph lauren,pre fall 2015", "1229": "ralph lauren,pre fall 2016", "1230": "ralph lauren,pre fall 2017", "1231": "ralph lauren,pre fall 2018", "1232": "ralph lauren,pre fall 2019", "1233": "ralph lauren,pre fall 2020", "1234": "ralph lauren,pre fall 2021", "1235": "ralph lauren,resort 2008", "1236": "ralph lauren,resort 2009", "1237": "ralph lauren,resort 2013", "1238": "ralph lauren,resort 2014", "1239": "ralph lauren,resort 2015", "1240": "ralph lauren,resort 2016", "1241": "ralph lauren,resort 2019", "1242": "ralph lauren,resort 2022", "1243": "ralph lauren,resort 2024", "1244": "ralph lauren,spring 2000 ready to wear", "1245": "ralph lauren,spring 2001 ready to wear", "1246": "ralph lauren,spring 2002 ready to wear", "1247": "ralph lauren,spring 2003 ready to wear", "1248": "ralph lauren,spring 2004 ready to wear", "1249": "ralph lauren,spring 2005 ready to wear", "1250": "ralph lauren,spring 2006 menswear", "1251": "ralph lauren,spring 2006 ready to wear", "1252": "ralph lauren,spring 2007 menswear", "1253": "ralph lauren,spring 2007 ready to wear", "1254": "ralph lauren,spring 2008 menswear", "1255": "ralph lauren,spring 2008 ready to wear", "1256": "ralph lauren,spring 2009 ready to wear", "1257": "ralph lauren,spring 2010 ready to wear", "1258": "ralph lauren,spring 2011 ready to wear", "1259": "ralph lauren,spring 2012 ready to wear", "1260": "ralph lauren,spring 2013 menswear", "1261": "ralph lauren,spring 2013 ready to wear", 
"1262": "ralph lauren,spring 2014 menswear", "1263": "ralph lauren,spring 2014 ready to wear", "1264": "ralph lauren,spring 2015 menswear", "1265": "ralph lauren,spring 2015 ready to wear", "1266": "ralph lauren,spring 2016 menswear", "1267": "ralph lauren,spring 2016 ready to wear", "1268": "ralph lauren,spring 2017 menswear", "1269": "ralph lauren,spring 2017 ready to wear", "1270": "ralph lauren,spring 2018 menswear", "1271": "ralph lauren,spring 2018 ready to wear", "1272": "ralph lauren,spring 2019 menswear", "1273": "ralph lauren,spring 2019 ready to wear", "1274": "ralph lauren,spring 2020 menswear", "1275": "ralph lauren,spring 2021 ready to wear", "1276": "ralph lauren,spring 2022 ready to wear", "1277": "ralph lauren,spring 2023 ready to wear", "1278": "ralph lauren,spring 2024 menswear", "1279": "ralph lauren,spring 2024 ready to wear", "1280": "saint laurent,fall 2000 ready to wear", "1281": "saint laurent,fall 2001 couture", "1282": "saint laurent,fall 2001 ready to wear", "1283": "saint laurent,fall 2002 ready to wear", "1284": "saint laurent,fall 2003 ready to wear", "1285": "saint laurent,fall 2004 ready to wear", "1286": "saint laurent,fall 2005 menswear", "1287": "saint laurent,fall 2005 ready to wear", "1288": "saint laurent,fall 2006 menswear", "1289": "saint laurent,fall 2006 ready to wear", "1290": "saint laurent,fall 2007 menswear", "1291": "saint laurent,fall 2007 ready to wear", "1292": "saint laurent,fall 2008 menswear", "1293": "saint laurent,fall 2008 ready to wear", "1294": "saint laurent,fall 2009 ready to wear", "1295": "saint laurent,fall 2010 menswear", "1296": "saint laurent,fall 2010 ready to wear", "1297": "saint laurent,fall 2011 menswear", "1298": "saint laurent,fall 2011 ready to wear", "1299": "saint laurent,fall 2012 menswear", "1300": "saint laurent,fall 2012 ready to wear", "1301": "saint laurent,fall 2013 menswear", "1302": "saint laurent,fall 2013 ready to wear", "1303": "saint laurent,fall 2014 menswear", "1304": "saint laurent,fall 2014 ready to wear", "1305": "saint laurent,fall 2015 menswear", "1306": "saint laurent,fall 2015 ready to wear", "1307": "saint laurent,fall 2016 menswear", "1308": "saint laurent,fall 2016 ready to wear", "1309": "saint laurent,fall 2017 ready to wear", "1310": "saint laurent,fall 2018 ready to wear", "1311": "saint laurent,fall 2019 menswear", "1312": "saint laurent,fall 2019 ready to wear", "1313": "saint laurent,fall 2020 ready to wear", "1314": "saint laurent,fall 2021 menswear", "1315": "saint laurent,fall 2021 ready to wear", "1316": "saint laurent,fall 2022 menswear", "1317": "saint laurent,fall 2022 ready to wear", "1318": "saint laurent,fall 2023 menswear", "1319": "saint laurent,fall 2023 ready to wear", "1320": "saint laurent,pre fall 2009", "1321": "saint laurent,pre fall 2010", "1322": "saint laurent,pre fall 2011", "1323": "saint laurent,pre fall 2012", "1324": "saint laurent,pre fall 2013", "1325": "saint laurent,pre fall 2016", "1326": "saint laurent,pre fall 2019", "1327": "saint laurent,pre fall 2020", "1328": "saint laurent,pre fall 2020 menswear", "1329": "saint laurent,pre fall 2021", "1330": "saint laurent,pre fall 2022", "1331": "saint laurent,pre fall 2023", "1332": "saint laurent,resort 2008", "1333": "saint laurent,resort 2010", "1334": "saint laurent,resort 2011", "1335": "saint laurent,resort 2012", "1336": "saint laurent,resort 2014", "1337": "saint laurent,resort 2020", "1338": "saint laurent,resort 2021", "1339": "saint laurent,resort 2022", "1340": "saint laurent,resort 2023", 
"1341": "saint laurent,spring 2000 ready to wear", "1342": "saint laurent,spring 2001 couture", "1343": "saint laurent,spring 2001 ready to wear", "1344": "saint laurent,spring 2002 couture", "1345": "saint laurent,spring 2002 ready to wear", "1346": "saint laurent,spring 2003 ready to wear", "1347": "saint laurent,spring 2004 ready to wear", "1348": "saint laurent,spring 2005 menswear", "1349": "saint laurent,spring 2005 ready to wear", "1350": "saint laurent,spring 2006 menswear", "1351": "saint laurent,spring 2006 ready to wear", "1352": "saint laurent,spring 2007 menswear", "1353": "saint laurent,spring 2007 ready to wear", "1354": "saint laurent,spring 2008 menswear", "1355": "saint laurent,spring 2008 ready to wear", "1356": "saint laurent,spring 2009 menswear", "1357": "saint laurent,spring 2009 ready to wear", "1358": "saint laurent,spring 2010 ready to wear", "1359": "saint laurent,spring 2011 menswear", "1360": "saint laurent,spring 2011 ready to wear", "1361": "saint laurent,spring 2012 menswear", "1362": "saint laurent,spring 2012 ready to wear", "1363": "saint laurent,spring 2013 ready to wear", "1364": "saint laurent,spring 2014 menswear", "1365": "saint laurent,spring 2014 ready to wear", "1366": "saint laurent,spring 2015 menswear", "1367": "saint laurent,spring 2015 ready to wear", "1368": "saint laurent,spring 2016 menswear", "1369": "saint laurent,spring 2016 ready to wear", "1370": "saint laurent,spring 2017 ready to wear", "1371": "saint laurent,spring 2018 ready to wear", "1372": "saint laurent,spring 2019 menswear", "1373": "saint laurent,spring 2019 ready to wear", "1374": "saint laurent,spring 2020 menswear", "1375": "saint laurent,spring 2020 ready to wear", "1376": "saint laurent,spring 2021 menswear", "1377": "saint laurent,spring 2021 ready to wear", "1378": "saint laurent,spring 2022 menswear", "1379": "saint laurent,spring 2022 ready to wear", "1380": "saint laurent,spring 2023 menswear", "1381": "saint laurent,spring 2023 ready to wear", "1382": "saint laurent,spring 2024 menswear", "1383": "saint laurent,spring 2024 ready to wear", "1384": "valentino,fall 2000 ready to wear", "1385": "valentino,fall 2001 couture", "1386": "valentino,fall 2001 ready to wear", "1387": "valentino,fall 2002 couture", "1388": "valentino,fall 2002 ready to wear", "1389": "valentino,fall 2003 couture", "1390": "valentino,fall 2003 ready to wear", "1391": "valentino,fall 2004 couture", "1392": "valentino,fall 2004 ready to wear", "1393": "valentino,fall 2005 couture", "1394": "valentino,fall 2005 menswear", "1395": "valentino,fall 2005 ready to wear", "1396": "valentino,fall 2006 couture", "1397": "valentino,fall 2006 menswear", "1398": "valentino,fall 2006 ready to wear", "1399": "valentino,fall 2007 couture", "1400": "valentino,fall 2007 menswear", "1401": "valentino,fall 2007 ready to wear", "1402": "valentino,fall 2008 couture", "1403": "valentino,fall 2008 menswear", "1404": "valentino,fall 2008 ready to wear", "1405": "valentino,fall 2009 couture", "1406": "valentino,fall 2009 ready to wear", "1407": "valentino,fall 2010 couture", "1408": "valentino,fall 2010 ready to wear", "1409": "valentino,fall 2011 couture", "1410": "valentino,fall 2011 ready to wear", "1411": "valentino,fall 2012 couture", "1412": "valentino,fall 2012 menswear", "1413": "valentino,fall 2012 ready to wear", "1414": "valentino,fall 2013 couture", "1415": "valentino,fall 2013 menswear", "1416": "valentino,fall 2013 ready to wear", "1417": "valentino,fall 2014 couture", "1418": "valentino,fall 2014 
menswear", "1419": "valentino,fall 2014 ready to wear", "1420": "valentino,fall 2015 couture", "1421": "valentino,fall 2015 menswear", "1422": "valentino,fall 2015 ready to wear", "1423": "valentino,fall 2016 couture", "1424": "valentino,fall 2016 menswear", "1425": "valentino,fall 2016 ready to wear", "1426": "valentino,fall 2017 couture", "1427": "valentino,fall 2017 menswear", "1428": "valentino,fall 2017 ready to wear", "1429": "valentino,fall 2018 couture", "1430": "valentino,fall 2018 menswear", "1431": "valentino,fall 2018 ready to wear", "1432": "valentino,fall 2019 couture", "1433": "valentino,fall 2019 menswear", "1434": "valentino,fall 2019 ready to wear", "1435": "valentino,fall 2020 couture", "1436": "valentino,fall 2020 menswear", "1437": "valentino,fall 2020 ready to wear", "1438": "valentino,fall 2021 couture", "1439": "valentino,fall 2021 ready to wear", "1440": "valentino,fall 2022 couture", "1441": "valentino,fall 2022 ready to wear", "1442": "valentino,fall 2023 couture", "1443": "valentino,fall 2023 ready to wear", "1444": "valentino,pre fall 2008", "1445": "valentino,pre fall 2010", "1446": "valentino,pre fall 2011", "1447": "valentino,pre fall 2012", "1448": "valentino,pre fall 2013", "1449": "valentino,pre fall 2014", "1450": "valentino,pre fall 2015", "1451": "valentino,pre fall 2016", "1452": "valentino,pre fall 2017", "1453": "valentino,pre fall 2018", "1454": "valentino,pre fall 2019", "1455": "valentino,pre fall 2020", "1456": "valentino,pre fall 2021", "1457": "valentino,pre fall 2022", "1458": "valentino,pre fall 2023", "1459": "valentino,pre fall 2024", "1460": "valentino,resort 2008", "1461": "valentino,resort 2009", "1462": "valentino,resort 2011", "1463": "valentino,resort 2012", "1464": "valentino,resort 2013", "1465": "valentino,resort 2014", "1466": "valentino,resort 2015", "1467": "valentino,resort 2016", "1468": "valentino,resort 2017", "1469": "valentino,resort 2018", "1470": "valentino,resort 2019", "1471": "valentino,resort 2020", "1472": "valentino,resort 2021", "1473": "valentino,resort 2022", "1474": "valentino,resort 2023", "1475": "valentino,resort 2024", "1476": "valentino,spring 2000 ready to wear", "1477": "valentino,spring 2001 couture", "1478": "valentino,spring 2001 ready to wear", "1479": "valentino,spring 2002 couture", "1480": "valentino,spring 2002 ready to wear", "1481": "valentino,spring 2003 couture", "1482": "valentino,spring 2003 ready to wear", "1483": "valentino,spring 2004 couture", "1484": "valentino,spring 2004 ready to wear", "1485": "valentino,spring 2005 couture", "1486": "valentino,spring 2005 menswear", "1487": "valentino,spring 2005 ready to wear", "1488": "valentino,spring 2006 couture", "1489": "valentino,spring 2006 menswear", "1490": "valentino,spring 2006 ready to wear", "1491": "valentino,spring 2007 couture", "1492": "valentino,spring 2007 menswear", "1493": "valentino,spring 2007 ready to wear", "1494": "valentino,spring 2008 couture", "1495": "valentino,spring 2008 menswear", "1496": "valentino,spring 2008 ready to wear", "1497": "valentino,spring 2009 couture", "1498": "valentino,spring 2009 menswear", "1499": "valentino,spring 2009 ready to wear", "1500": "valentino,spring 2010 couture", "1501": "valentino,spring 2010 ready to wear", "1502": "valentino,spring 2011 couture", "1503": "valentino,spring 2011 ready to wear", "1504": "valentino,spring 2012 couture", "1505": "valentino,spring 2012 menswear", "1506": "valentino,spring 2012 ready to wear", "1507": "valentino,spring 2013 couture", "1508": 
"valentino,spring 2013 menswear", "1509": "valentino,spring 2013 ready to wear", "1510": "valentino,spring 2014 couture", "1511": "valentino,spring 2014 menswear", "1512": "valentino,spring 2014 ready to wear", "1513": "valentino,spring 2015 couture", "1514": "valentino,spring 2015 menswear", "1515": "valentino,spring 2015 ready to wear", "1516": "valentino,spring 2016 couture", "1517": "valentino,spring 2016 menswear", "1518": "valentino,spring 2016 ready to wear", "1519": "valentino,spring 2017 couture", "1520": "valentino,spring 2017 menswear", "1521": "valentino,spring 2017 ready to wear", "1522": "valentino,spring 2018 couture", "1523": "valentino,spring 2018 menswear", "1524": "valentino,spring 2018 ready to wear", "1525": "valentino,spring 2019 couture", "1526": "valentino,spring 2019 menswear", "1527": "valentino,spring 2019 ready to wear", "1528": "valentino,spring 2020 couture", "1529": "valentino,spring 2020 menswear", "1530": "valentino,spring 2020 ready to wear", "1531": "valentino,spring 2021 couture", "1532": "valentino,spring 2021 menswear", "1533": "valentino,spring 2021 ready to wear", "1534": "valentino,spring 2022 couture", "1535": "valentino,spring 2022 ready to wear", "1536": "valentino,spring 2023 couture", "1537": "valentino,spring 2023 ready to wear", "1538": "valentino,spring 2024 menswear", "1539": "versace by fendi,pre fall 2022", "1540": "versace,fall 1991 ready to wear", "1541": "versace,fall 1992 ready to wear", "1542": "versace,fall 1993 ready to wear", "1543": "versace,fall 1994 ready to wear", "1544": "versace,fall 1995 ready to wear", "1545": "versace,fall 1996 ready to wear", "1546": "versace,fall 1997 ready to wear", "1547": "versace,fall 2000 ready to wear", "1548": "versace,fall 2001 couture", "1549": "versace,fall 2001 ready to wear", "1550": "versace,fall 2002 couture", "1551": "versace,fall 2002 ready to wear", "1552": "versace,fall 2003 couture", "1553": "versace,fall 2003 ready to wear", "1554": "versace,fall 2004 ready to wear", "1555": "versace,fall 2005 menswear", "1556": "versace,fall 2005 ready to wear", "1557": "versace,fall 2006 menswear", "1558": "versace,fall 2006 ready to wear", "1559": "versace,fall 2007 menswear", "1560": "versace,fall 2007 ready to wear", "1561": "versace,fall 2008 menswear", "1562": "versace,fall 2008 ready to wear", "1563": "versace,fall 2009 ready to wear", "1564": "versace,fall 2010 menswear", "1565": "versace,fall 2010 ready to wear", "1566": "versace,fall 2011 menswear", "1567": "versace,fall 2011 ready to wear", "1568": "versace,fall 2012 menswear", "1569": "versace,fall 2012 ready to wear", "1570": "versace,fall 2013 menswear", "1571": "versace,fall 2013 ready to wear", "1572": "versace,fall 2014 menswear", "1573": "versace,fall 2014 ready to wear", "1574": "versace,fall 2015 menswear", "1575": "versace,fall 2015 ready to wear", "1576": "versace,fall 2016 menswear", "1577": "versace,fall 2016 ready to wear", "1578": "versace,fall 2017 menswear", "1579": "versace,fall 2017 ready to wear", "1580": "versace,fall 2018 menswear", "1581": "versace,fall 2018 ready to wear", "1582": "versace,fall 2019 menswear", "1583": "versace,fall 2019 ready to wear", "1584": "versace,fall 2020 menswear", "1585": "versace,fall 2020 ready to wear", "1586": "versace,fall 2021 ready to wear", "1587": "versace,fall 2022 menswear", "1588": "versace,fall 2022 ready to wear", "1589": "versace,fall 2023 ready to wear", "1590": "versace,pre fall 2008", "1591": "versace,pre fall 2009", "1592": "versace,pre fall 2010", "1593": "versace,pre 
fall 2011", "1594": "versace,pre fall 2012", "1595": "versace,pre fall 2013", "1596": "versace,pre fall 2014", "1597": "versace,pre fall 2015", "1598": "versace,pre fall 2016", "1599": "versace,pre fall 2017", "1600": "versace,pre fall 2018", "1601": "versace,pre fall 2019", "1602": "versace,pre fall 2020", "1603": "versace,pre fall 2021", "1604": "versace,pre fall 2022", "1605": "versace,pre fall 2022 menswear", "1606": "versace,pre fall 2023", "1607": "versace,resort 2008", "1608": "versace,resort 2009", "1609": "versace,resort 2010", "1610": "versace,resort 2011", "1611": "versace,resort 2012", "1612": "versace,resort 2013", "1613": "versace,resort 2014", "1614": "versace,resort 2015", "1615": "versace,resort 2016", "1616": "versace,resort 2017", "1617": "versace,resort 2018", "1618": "versace,resort 2019", "1619": "versace,resort 2020", "1620": "versace,resort 2021", "1621": "versace,resort 2022", "1622": "versace,resort 2023", "1623": "versace,spring 1991 ready to wear", "1624": "versace,spring 1992 ready to wear", "1625": "versace,spring 1993 ready to wear", "1626": "versace,spring 1994 ready to wear", "1627": "versace,spring 1995 ready to wear", "1628": "versace,spring 1996 ready to wear", "1629": "versace,spring 1997 ready to wear", "1630": "versace,spring 2000 ready to wear", "1631": "versace,spring 2001 couture", "1632": "versace,spring 2001 ready to wear", "1633": "versace,spring 2002 couture", "1634": "versace,spring 2002 ready to wear", "1635": "versace,spring 2003 couture", "1636": "versace,spring 2003 ready to wear", "1637": "versace,spring 2004 couture", "1638": "versace,spring 2004 ready to wear", "1639": "versace,spring 2005 menswear", "1640": "versace,spring 2005 ready to wear", "1641": "versace,spring 2006 menswear", "1642": "versace,spring 2006 ready to wear", "1643": "versace,spring 2007 menswear", "1644": "versace,spring 2007 ready to wear", "1645": "versace,spring 2008 couture", "1646": "versace,spring 2008 menswear", "1647": "versace,spring 2008 ready to wear", "1648": "versace,spring 2009 menswear", "1649": "versace,spring 2009 ready to wear", "1650": "versace,spring 2010 ready to wear", "1651": "versace,spring 2011 menswear", "1652": "versace,spring 2011 ready to wear", "1653": "versace,spring 2012 menswear", "1654": "versace,spring 2012 ready to wear", "1655": "versace,spring 2013 menswear", "1656": "versace,spring 2013 ready to wear", "1657": "versace,spring 2014 menswear", "1658": "versace,spring 2014 ready to wear", "1659": "versace,spring 2015 menswear", "1660": "versace,spring 2015 ready to wear", "1661": "versace,spring 2016 menswear", "1662": "versace,spring 2016 ready to wear", "1663": "versace,spring 2017 menswear", "1664": "versace,spring 2017 ready to wear", "1665": "versace,spring 2018 menswear", "1666": "versace,spring 2018 ready to wear", "1667": "versace,spring 2019 menswear", "1668": "versace,spring 2019 ready to wear", "1669": "versace,spring 2020 menswear", "1670": "versace,spring 2020 ready to wear", "1671": "versace,spring 2021 menswear", "1672": "versace,spring 2021 ready to wear", "1673": "versace,spring 2022 ready to wear", "1674": "versace,spring 2023 menswear", "1675": "versace,spring 2023 ready to wear", "1676": "versace,spring 2024 ready to wear"}}}}], "splits": [{"name": "train", "num_bytes": 2097138827.181, "num_examples": 87547}], "download_size": 2042963572, "dataset_size": 2097138827.181}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-28T18:52:15+00:00
[]
[]
TAGS #region-us
# vogue-runway-top15-512px Vogue Runway - 15 fashion houses - 1679 collections - 87,547 images Fashion Houses: Alexander McQueen, Armani, Balenciaga, Calvin Klein, Chanel, Dior, Fendi, Gucci, Hermes, Louis Vuitton, Prada, Ralph Lauren, Saint Laurent, Valentino, Versace. Images are maximum height 512 pixels.
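A minimal loading sketch (not from the original card): it assumes the usual image-classification schema — an `image` column plus a `label` `ClassLabel` whose names are the "house,season collection" strings enumerated in the metadata above — and uses a placeholder repository path, since the card does not restate the repo id.

```python
from datasets import load_dataset

# Placeholder repo path -- substitute the actual Hub location of
# vogue-runway-top15-512px.
ds = load_dataset("someuser/vogue-runway-top15-512px", split="train")

# Assumed schema: `image` (PIL image, height <= 512 px) and `label`
# (a ClassLabel over the "house,season collection" names).
label_feature = ds.features["label"]
example = ds[0]
print(label_feature.int2str(example["label"]))  # e.g. "fendi,fall 2000 ready to wear"
print(example["image"].size)                    # (width, height), height <= 512
```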
[ "# vogue-runway-top15-512px\n\n Vogue Runway\n- 15 fashion houses\n- 1679 collections\n- 87,547 images\n\nFashion Houses: Alexander McQueen, Armani, Balenciaga, Calvin Klein, Chanel, Dior, Fendi, Gucci, Hermes, Louis Vuitton, Prada, Ralph Lauren, Saint Laurent, Valentino, Versace.\n\nImages are maximum height 512 pixels.\n\n!image/jpeg\n\n!image/jpeg\n\n!image/jpeg" ]
[ "TAGS\n#region-us \n", "# vogue-runway-top15-512px\n\n Vogue Runway\n- 15 fashion houses\n- 1679 collections\n- 87,547 images\n\nFashion Houses: Alexander McQueen, Armani, Balenciaga, Calvin Klein, Chanel, Dior, Fendi, Gucci, Hermes, Louis Vuitton, Prada, Ralph Lauren, Saint Laurent, Valentino, Versace.\n\nImages are maximum height 512 pixels.\n\n!image/jpeg\n\n!image/jpeg\n\n!image/jpeg" ]
[ 6, 101 ]
[ "passage: TAGS\n#region-us \n# vogue-runway-top15-512px\n\n Vogue Runway\n- 15 fashion houses\n- 1679 collections\n- 87,547 images\n\nFashion Houses: Alexander McQueen, Armani, Balenciaga, Calvin Klein, Chanel, Dior, Fendi, Gucci, Hermes, Louis Vuitton, Prada, Ralph Lauren, Saint Laurent, Valentino, Versace.\n\nImages are maximum height 512 pixels.\n\n!image/jpeg\n\n!image/jpeg\n\n!image/jpeg" ]
e994c5d888a7d429b6d94ea9d50a4da5868711be
# Dataset Card for "autotrain-data-Liquid" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tschlarman/autotrain-data-Liquid
[ "region:us" ]
2023-12-28T05:53:37+00:00
{"dataset_info": {"features": [{"name": "autotrain_text", "dtype": "string"}, {"name": "autotrain_label", "dtype": {"class_label": {"names": {"0": 0, "1": 1}}}}], "splits": [{"name": "train", "num_bytes": 45906401, "num_examples": 140539}, {"name": "validation", "num_bytes": 11595942, "num_examples": 35135}], "download_size": 35819599, "dataset_size": 57502343}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]}
2023-12-28T05:55:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "autotrain-data-Liquid" More Information needed
[ "# Dataset Card for \"autotrain-data-Liquid\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"autotrain-data-Liquid\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-Liquid\"\n\nMore Information needed" ]
4418dffa3acdb1e1ce8cf8f465bddde2513a80f0
- test purpose
bartmao/test_dataset
[ "region:us" ]
2023-12-28T05:56:03+00:00
{}
2023-12-28T05:57:13+00:00
[]
[]
TAGS #region-us
- test purpose
[]
[ "TAGS\n#region-us \n" ]
[ 6 ]
[ "passage: TAGS\n#region-us \n" ]
58bdbe850e81d5cc791bd787d4fbe753d8b963fc
# Dataset Card for "opencpop_synth" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Codec-SUPERB/opencpop_synth
[ "region:us" ]
2023-12-28T05:58:23+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "original", "path": "data/original-*"}, {"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 44100}}}, {"name": "id", "dtype": "string"}], "splits": [{"name": "original", "num_bytes": 2191682446.0, "num_examples": 100}, {"name": "academicodec_hifi_16k_320d", "num_bytes": 795138400.0, "num_examples": 100}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 795138400.0, "num_examples": 100}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 1192724960.0, "num_examples": 100}, {"name": "dac_16k", "num_bytes": 795171921.0, "num_examples": 100}, {"name": "dac_24k", "num_bytes": 1192755131.0, "num_examples": 100}, {"name": "dac_44k", "num_bytes": 2191682947.0, "num_examples": 100}, {"name": "encodec_24k", "num_bytes": 1192755327.0, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 795162221.0, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 795162221.0, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 795162221.0, "num_examples": 100}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 795162221.0, "num_examples": 100}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 795162221.0, "num_examples": 100}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 795162221.0, "num_examples": 100}, {"name": "speech_tokenizer_16k", "num_bytes": 795201133.0, "num_examples": 100}], "download_size": 12008938322, "dataset_size": 15913223991.0}}
2023-12-28T06:16:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for "opencpop_synth" More Information needed
[ "# Dataset Card for \"opencpop_synth\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"opencpop_synth\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"opencpop_synth\"\n\nMore Information needed" ]
c2d90f9bb9da7b9c7ab858085da9ca44fee7bd39
# Dataset Card for "hellaswag" ## Table of Contents - [Dataset Description](#dataset-description) - [Dataset Summary](#dataset-summary) - [Supported Tasks and Leaderboards](#supported-tasks-and-leaderboards) - [Languages](#languages) - [Dataset Structure](#dataset-structure) - [Data Instances](#data-instances) - [Data Fields](#data-fields) - [Data Splits](#data-splits) - [Dataset Creation](#dataset-creation) - [Curation Rationale](#curation-rationale) - [Source Data](#source-data) - [Annotations](#annotations) - [Personal and Sensitive Information](#personal-and-sensitive-information) - [Considerations for Using the Data](#considerations-for-using-the-data) - [Social Impact of Dataset](#social-impact-of-dataset) - [Discussion of Biases](#discussion-of-biases) - [Other Known Limitations](#other-known-limitations) - [Additional Information](#additional-information) - [Dataset Curators](#dataset-curators) - [Licensing Information](#licensing-information) - [Citation Information](#citation-information) - [Contributions](#contributions) ## Dataset Description - **Homepage:** [https://rowanzellers.com/hellaswag/](https://rowanzellers.com/hellaswag/) - **Repository:** [https://github.com/rowanz/hellaswag/](https://github.com/rowanz/hellaswag/) - **Paper:** [HellaSwag: Can a Machine Really Finish Your Sentence?](https://arxiv.org/abs/1905.07830) - **Point of Contact:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) - **Size of downloaded dataset files:** 71.49 MB - **Size of the generated dataset:** 65.32 MB - **Total amount of disk used:** 136.81 MB ### Dataset Summary HellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019. ### Supported Tasks and Leaderboards [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Languages [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Dataset Structure ### Data Instances #### default - **Size of downloaded dataset files:** 71.49 MB - **Size of the generated dataset:** 65.32 MB - **Total amount of disk used:** 136.81 MB An example of 'train' looks as follows. ``` This example was too long and was cropped: { "activity_label": "Removing ice from car", "ctx": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles. then", "ctx_a": "Then, the man writes over the snow covering the window of a car, and a woman wearing winter clothes smiles.", "ctx_b": "then", "endings": "[\", the man adds wax to the windshield and cuts it.\", \", a person board a ski lift, while two men supporting the head of the per...", "ind": 4, "label": "3", "source_id": "activitynet~v_-1IBHYS3L-Y", "split": "train", "split_type": "indomain" } ``` ### Data Fields The data fields are the same among all splits. #### default - `ind`: a `int32` feature. - `activity_label`: a `string` feature. - `ctx_a`: a `string` feature. - `ctx_b`: a `string` feature. - `ctx`: a `string` feature. - `endings`: a `list` of `string` features. - `source_id`: a `string` feature. - `split`: a `string` feature. - `split_type`: a `string` feature. - `label`: a `string` feature. 
### Data Splits | name |train|validation|test | |-------|----:|---------:|----:| |default|39905| 10042|10003| ## Dataset Creation ### Curation Rationale [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Source Data #### Initial Data Collection and Normalization [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the source language producers? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Annotations #### Annotation process [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) #### Who are the annotators? [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Personal and Sensitive Information [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Considerations for Using the Data ### Social Impact of Dataset [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Discussion of Biases [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Other Known Limitations [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ## Additional Information ### Dataset Curators [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) ### Licensing Information MIT https://github.com/rowanz/hellaswag/blob/master/LICENSE ### Citation Information ``` @inproceedings{zellers2019hellaswag, title={HellaSwag: Can a Machine Really Finish Your Sentence?}, author={Zellers, Rowan and Holtzman, Ari and Bisk, Yonatan and Farhadi, Ali and Choi, Yejin}, booktitle ={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics}, year={2019} } ``` ### Contributions Thanks to [@albertvillanova](https://github.com/albertvillanova), [@mariamabarham](https://github.com/mariamabarham), [@thomwolf](https://github.com/thomwolf), [@patrickvonplaten](https://github.com/patrickvonplaten), [@lewtun](https://github.com/lewtun) for adding this dataset.
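A usage sketch for this mirror, assuming it loads like the upstream `hellaswag` dataset (the four-ending multiple-choice layout below is standard HellaSwag):

```python
from datasets import load_dataset

ds = load_dataset("ycsong-eugene/syc-hellaswag2")
print({name: split.num_rows for name, split in ds.items()})  # 39905 / 10042 / 10003

ex = ds["validation"][0]
print(ex["ctx"])                            # context to be completed
for i, ending in enumerate(ex["endings"]):  # four candidate endings
    print(i, ending)
print("gold:", ex["label"])                 # index of the correct ending, as a string
```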
ycsong-eugene/syc-hellaswag2
[ "language:en", "arxiv:1905.07830", "region:us" ]
2023-12-28T06:27:42+00:00
{"language": ["en"], "paperswithcode_id": "hellaswag", "pretty_name": "HellaSwag", "dataset_info": {"features": [{"name": "ind", "dtype": "int32"}, {"name": "activity_label", "dtype": "string"}, {"name": "ctx_a", "dtype": "string"}, {"name": "ctx_b", "dtype": "string"}, {"name": "ctx", "dtype": "string"}, {"name": "endings", "sequence": "string"}, {"name": "source_id", "dtype": "string"}, {"name": "split", "dtype": "string"}, {"name": "split_type", "dtype": "string"}, {"name": "label", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 43232624, "num_examples": 39905}, {"name": "test", "num_bytes": 10791853, "num_examples": 10003}, {"name": "validation", "num_bytes": 11175717, "num_examples": 10042}], "download_size": 71494896, "dataset_size": 65200194}}
2023-12-28T08:29:13+00:00
[ "1905.07830" ]
[ "en" ]
TAGS #language-English #arxiv-1905.07830 #region-us
Dataset Card for "hellaswag" ============================ Table of Contents ----------------- * Dataset Description + Dataset Summary + Supported Tasks and Leaderboards + Languages * Dataset Structure + Data Instances + Data Fields + Data Splits * Dataset Creation + Curation Rationale + Source Data + Annotations + Personal and Sensitive Information * Considerations for Using the Data + Social Impact of Dataset + Discussion of Biases + Other Known Limitations * Additional Information + Dataset Curators + Licensing Information + Citation Information + Contributions Dataset Description ------------------- * Homepage: URL * Repository: URL * Paper: HellaSwag: Can a Machine Really Finish Your Sentence? * Point of Contact: * Size of downloaded dataset files: 71.49 MB * Size of the generated dataset: 65.32 MB * Total amount of disk used: 136.81 MB ### Dataset Summary HellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019. ### Supported Tasks and Leaderboards ### Languages Dataset Structure ----------------- ### Data Instances #### default * Size of downloaded dataset files: 71.49 MB * Size of the generated dataset: 65.32 MB * Total amount of disk used: 136.81 MB An example of 'train' looks as follows. ### Data Fields The data fields are the same among all splits. #### default * 'ind': a 'int32' feature. * 'activity\_label': a 'string' feature. * 'ctx\_a': a 'string' feature. * 'ctx\_b': a 'string' feature. * 'ctx': a 'string' feature. * 'endings': a 'list' of 'string' features. * 'source\_id': a 'string' feature. * 'split': a 'string' feature. * 'split\_type': a 'string' feature. * 'label': a 'string' feature. ### Data Splits Dataset Creation ---------------- ### Curation Rationale ### Source Data #### Initial Data Collection and Normalization #### Who are the source language producers? ### Annotations #### Annotation process #### Who are the annotators? ### Personal and Sensitive Information Considerations for Using the Data --------------------------------- ### Social Impact of Dataset ### Discussion of Biases ### Other Known Limitations Additional Information ---------------------- ### Dataset Curators ### Licensing Information MIT URL ### Contributions Thanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset.
[ "### Dataset Summary\n\n\nHellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 71.49 MB\n* Size of the generated dataset: 65.32 MB\n* Total amount of disk used: 136.81 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'ind': a 'int32' feature.\n* 'activity\\_label': a 'string' feature.\n* 'ctx\\_a': a 'string' feature.\n* 'ctx\\_b': a 'string' feature.\n* 'ctx': a 'string' feature.\n* 'endings': a 'list' of 'string' features.\n* 'source\\_id': a 'string' feature.\n* 'split': a 'string' feature.\n* 'split\\_type': a 'string' feature.\n* 'label': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT URL", "### Contributions\n\n\nThanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
[ "TAGS\n#language-English #arxiv-1905.07830 #region-us \n", "### Dataset Summary\n\n\nHellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019.", "### Supported Tasks and Leaderboards", "### Languages\n\n\nDataset Structure\n-----------------", "### Data Instances", "#### default\n\n\n* Size of downloaded dataset files: 71.49 MB\n* Size of the generated dataset: 65.32 MB\n* Total amount of disk used: 136.81 MB\n\n\nAn example of 'train' looks as follows.", "### Data Fields\n\n\nThe data fields are the same among all splits.", "#### default\n\n\n* 'ind': a 'int32' feature.\n* 'activity\\_label': a 'string' feature.\n* 'ctx\\_a': a 'string' feature.\n* 'ctx\\_b': a 'string' feature.\n* 'ctx': a 'string' feature.\n* 'endings': a 'list' of 'string' features.\n* 'source\\_id': a 'string' feature.\n* 'split': a 'string' feature.\n* 'split\\_type': a 'string' feature.\n* 'label': a 'string' feature.", "### Data Splits\n\n\n\nDataset Creation\n----------------", "### Curation Rationale", "### Source Data", "#### Initial Data Collection and Normalization", "#### Who are the source language producers?", "### Annotations", "#### Annotation process", "#### Who are the annotators?", "### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------", "### Social Impact of Dataset", "### Discussion of Biases", "### Other Known Limitations\n\n\nAdditional Information\n----------------------", "### Dataset Curators", "### Licensing Information\n\n\nMIT URL", "### Contributions\n\n\nThanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
[ 19, 42, 10, 11, 6, 52, 17, 140, 11, 7, 4, 10, 10, 5, 5, 9, 18, 7, 8, 14, 6, 8, 40 ]
[ "passage: TAGS\n#language-English #arxiv-1905.07830 #region-us \n### Dataset Summary\n\n\nHellaSwag: Can a Machine Really Finish Your Sentence? is a new dataset for commonsense NLI. A paper was published at ACL2019.### Supported Tasks and Leaderboards### Languages\n\n\nDataset Structure\n-----------------### Data Instances#### default\n\n\n* Size of downloaded dataset files: 71.49 MB\n* Size of the generated dataset: 65.32 MB\n* Total amount of disk used: 136.81 MB\n\n\nAn example of 'train' looks as follows.### Data Fields\n\n\nThe data fields are the same among all splits.#### default\n\n\n* 'ind': a 'int32' feature.\n* 'activity\\_label': a 'string' feature.\n* 'ctx\\_a': a 'string' feature.\n* 'ctx\\_b': a 'string' feature.\n* 'ctx': a 'string' feature.\n* 'endings': a 'list' of 'string' features.\n* 'source\\_id': a 'string' feature.\n* 'split': a 'string' feature.\n* 'split\\_type': a 'string' feature.\n* 'label': a 'string' feature.### Data Splits\n\n\n\nDataset Creation\n----------------### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information\n\n\nConsiderations for Using the Data\n---------------------------------### Social Impact of Dataset### Discussion of Biases### Other Known Limitations\n\n\nAdditional Information\n----------------------### Dataset Curators### Licensing Information\n\n\nMIT URL### Contributions\n\n\nThanks to @albertvillanova, @mariamabarham, @thomwolf, @patrickvonplaten, @lewtun for adding this dataset." ]
04040350bceb85dff48cda7ec457c329d0680edc
This is a recreation of the [tulu-v2-sft-mixture](https://huggingface.co/datasets/allenai/tulu-v2-sft-mixture), **without** splitting the ShareGPT dataset into chunks of at most 4096 tokens. This might be of interest to people doing long-context finetuning. Please refer to the original tulu-v2-sft-mixture for the details of this dataset mixture. ### License We are releasing this dataset under the terms of [ODC-BY](https://opendatacommons.org/licenses/by/1-0/). By using this, you are also bound by the [Common Crawl terms of use](https://commoncrawl.org/terms-of-use/) in respect of the content contained in the dataset.
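One quick way to see the effect is to measure conversation lengths directly. The sketch below assumes the same `messages` schema as the original mixture (a list of `{"role", "content"}` dicts per example) and uses a rough whitespace token count rather than a real tokenizer:

```python
from datasets import load_dataset

ds = load_dataset("allenai/tulu-v2-sft-long-mixture", split="train")

# Approximate length per conversation; without 4096-token chunking, some
# conversations should come out far longer than in the chunked mixture.
def approx_len(example):
    return sum(len(m["content"].split()) for m in example["messages"])

lengths = sorted(approx_len(ex) for ex in ds.select(range(1000)))
print("median:", lengths[len(lengths) // 2], "max:", lengths[-1])
```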
allenai/tulu-v2-sft-long-mixture
[ "task_categories:text-generation", "task_categories:question-answering", "size_categories:100K<n<1M", "language:en", "license:odc-by", "region:us" ]
2023-12-28T07:01:05+00:00
{"language": ["en"], "license": "odc-by", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "question-answering"]}
2023-12-28T07:34:16+00:00
[]
[ "en" ]
TAGS #task_categories-text-generation #task_categories-question-answering #size_categories-100K<n<1M #language-English #license-odc-by #region-us
This is a recreation of the tulu-v2-sft-mixture, without splitting the ShareGPT dataset into chunks of at most 4096 tokens. This might be interesting to people who are doing long-context finetuning. Please refer to the original tulu-v2-sft-mixture for the details of this dataset mixture. ### License We are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset.
[ "### License\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset." ]
[ "TAGS\n#task_categories-text-generation #task_categories-question-answering #size_categories-100K<n<1M #language-English #license-odc-by #region-us \n", "### License\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset." ]
[ 53, 49 ]
[ "passage: TAGS\n#task_categories-text-generation #task_categories-question-answering #size_categories-100K<n<1M #language-English #license-odc-by #region-us \n### License\nWe are releasing this dataset under the terms of ODC-BY. By using this, you are also bound by the Common Crawl terms of use in respect of the content contained in the dataset." ]
ffdd68250bb7f54384d769c0e00801969ba5bb83
# Dataset Card for "code_t5_plus_fine_tuning_complete_dataset_v2" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hemantk089/code_t5_plus_fine_tuning_complete_dataset_v2
[ "region:us" ]
2023-12-28T07:19:06+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1018984, "num_examples": 2433}, {"name": "test", "num_bytes": 246238, "num_examples": 608}], "download_size": 355450, "dataset_size": 1265222}}
2023-12-28T07:19:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "code_t5_plus_fine_tuning_complete_dataset_v2" More Information needed
[ "# Dataset Card for \"code_t5_plus_fine_tuning_complete_dataset_v2\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"code_t5_plus_fine_tuning_complete_dataset_v2\"\n\nMore Information needed" ]
[ 6, 30 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"code_t5_plus_fine_tuning_complete_dataset_v2\"\n\nMore Information needed" ]
1e3bb2f99d64d8df2f8edfd47ac388458c27cbc0
# Dataset Card for "llama2_7b_fine_tuning_all_tasks_v1" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
hemantk089/llama2_7b_fine_tuning_all_tasks_v1
[ "region:us" ]
2023-12-28T07:23:17+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1018984, "num_examples": 2433}, {"name": "test", "num_bytes": 246238, "num_examples": 608}], "download_size": 356054, "dataset_size": 1265222}}
2023-12-28T07:23:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for "llama2_7b_fine_tuning_all_tasks_v1" More Information needed
[ "# Dataset Card for \"llama2_7b_fine_tuning_all_tasks_v1\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"llama2_7b_fine_tuning_all_tasks_v1\"\n\nMore Information needed" ]
[ 6, 29 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"llama2_7b_fine_tuning_all_tasks_v1\"\n\nMore Information needed" ]
6cdb0e46a11f7879097fe135b8fc052ac7d2840b
This dataset is a sample used to learn how to share a dataset to the Hub.
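A minimal sketch of the sharing workflow this sample is meant to exercise (the repository id and columns below are illustrative only, and pushing requires authenticating first, e.g. via `huggingface-cli login`):

```python
from datasets import Dataset

# Build a tiny in-memory dataset and publish it to the Hub.
ds = Dataset.from_dict({"text": ["a positive example", "a negative example"], "label": [1, 0]})
ds.push_to_hub("your-username/test_script")  # hypothetical repo id
```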
lijianbin/test_script
[ "task_categories:text-classification", "region:us" ]
2023-12-28T07:30:42+00:00
{"task_categories": ["text-classification"]}
2023-12-29T08:42:01+00:00
[]
[]
TAGS #task_categories-text-classification #region-us
This dataset is a sample used to learn how to share a dataset to the Hub.
[]
[ "TAGS\n#task_categories-text-classification #region-us \n" ]
[ 17 ]
[ "passage: TAGS\n#task_categories-text-classification #region-us \n" ]
3611d35c928bbcccade0bc02482fb6fd4e3f00af
# Dataset Card for Evaluation run of xaviviro/FLAMA-0.1-3B <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [xaviviro/FLAMA-0.1-3B](https://huggingface.co/xaviviro/FLAMA-0.1-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_xaviviro__FLAMA-0.1-3B", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-28T07:59:14.342619](https://huggingface.co/datasets/open-llm-leaderboard/details_xaviviro__FLAMA-0.1-3B/blob/main/results_2023-12-28T07-59-14.342619.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.27494961706906274, "acc_stderr": 0.03143686192639473, "acc_norm": 0.2760581114123852, "acc_norm_stderr": 0.032193127558020423, "mc1": 0.23255813953488372, "mc1_stderr": 0.014789157531080503, "mc2": 0.3719034123619623, "mc2_stderr": 0.014194966399461809 }, "harness|arc:challenge|25": { "acc": 0.3916382252559727, "acc_stderr": 0.014264122124938217, "acc_norm": 0.41723549488054607, "acc_norm_stderr": 0.01440982551840308 }, "harness|hellaswag|10": { "acc": 0.5292770364469229, "acc_stderr": 0.004981220135882323, "acc_norm": 0.7141007767377017, "acc_norm_stderr": 0.0045091819193228385 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.2962962962962963, "acc_stderr": 0.03944624162501116, "acc_norm": 0.2962962962962963, "acc_norm_stderr": 0.03944624162501116 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.2631578947368421, "acc_stderr": 0.03583496176361063, "acc_norm": 0.2631578947368421, "acc_norm_stderr": 0.03583496176361063 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.26, "acc_stderr": 0.04408440022768079, "acc_norm": 0.26, "acc_norm_stderr": 0.04408440022768079 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.2679245283018868, "acc_stderr": 0.027257260322494845, "acc_norm": 0.2679245283018868, "acc_norm_stderr": 0.027257260322494845 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.24305555555555555, "acc_stderr": 0.0358687928008034, "acc_norm": 0.24305555555555555, "acc_norm_stderr": 0.0358687928008034 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.24, "acc_stderr": 0.04292346959909283, "acc_norm": 0.24, "acc_norm_stderr": 0.04292346959909283 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.18, "acc_stderr": 0.038612291966536955, "acc_norm": 0.18, "acc_norm_stderr": 0.038612291966536955 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, 
"acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.19653179190751446, "acc_stderr": 0.030299574664788147, "acc_norm": 0.19653179190751446, "acc_norm_stderr": 0.030299574664788147 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.18627450980392157, "acc_stderr": 0.03873958714149351, "acc_norm": 0.18627450980392157, "acc_norm_stderr": 0.03873958714149351 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.33191489361702126, "acc_stderr": 0.030783736757745647, "acc_norm": 0.33191489361702126, "acc_norm_stderr": 0.030783736757745647 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.23684210526315788, "acc_stderr": 0.039994238792813344, "acc_norm": 0.23684210526315788, "acc_norm_stderr": 0.039994238792813344 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.2827586206896552, "acc_stderr": 0.03752833958003336, "acc_norm": 0.2827586206896552, "acc_norm_stderr": 0.03752833958003336 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.28835978835978837, "acc_stderr": 0.02333065405453591, "acc_norm": 0.28835978835978837, "acc_norm_stderr": 0.02333065405453591 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.23809523809523808, "acc_stderr": 0.038095238095238106, "acc_norm": 0.23809523809523808, "acc_norm_stderr": 0.038095238095238106 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.31, "acc_stderr": 0.04648231987117316, "acc_norm": 0.31, "acc_norm_stderr": 0.04648231987117316 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.2645161290322581, "acc_stderr": 0.02509189237885928, "acc_norm": 0.2645161290322581, "acc_norm_stderr": 0.02509189237885928 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.2660098522167488, "acc_stderr": 0.03108982600293753, "acc_norm": 0.2660098522167488, "acc_norm_stderr": 0.03108982600293753 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.25, "acc_stderr": 0.04351941398892446, "acc_norm": 0.25, "acc_norm_stderr": 0.04351941398892446 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.3090909090909091, "acc_stderr": 0.036085410115739666, "acc_norm": 0.3090909090909091, "acc_norm_stderr": 0.036085410115739666 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.24242424242424243, "acc_stderr": 0.030532892233932036, "acc_norm": 0.24242424242424243, "acc_norm_stderr": 0.030532892233932036 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.23316062176165803, "acc_stderr": 0.03051611137147601, "acc_norm": 0.23316062176165803, "acc_norm_stderr": 0.03051611137147601 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.25384615384615383, "acc_stderr": 0.022066054378726257, "acc_norm": 0.25384615384615383, "acc_norm_stderr": 0.022066054378726257 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.24814814814814815, "acc_stderr": 0.0263357394040558, "acc_norm": 0.24814814814814815, "acc_norm_stderr": 0.0263357394040558 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.24369747899159663, "acc_stderr": 0.02788682807838056, "acc_norm": 0.24369747899159663, "acc_norm_stderr": 0.02788682807838056 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.2582781456953642, "acc_stderr": 0.035737053147634576, "acc_norm": 0.2582781456953642, 
"acc_norm_stderr": 0.035737053147634576 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.24036697247706423, "acc_stderr": 0.01832060732096407, "acc_norm": 0.24036697247706423, "acc_norm_stderr": 0.01832060732096407 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.23148148148148148, "acc_stderr": 0.02876511171804694, "acc_norm": 0.23148148148148148, "acc_norm_stderr": 0.02876511171804694 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.27450980392156865, "acc_stderr": 0.03132179803083292, "acc_norm": 0.27450980392156865, "acc_norm_stderr": 0.03132179803083292 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.22784810126582278, "acc_stderr": 0.02730348459906942, "acc_norm": 0.22784810126582278, "acc_norm_stderr": 0.02730348459906942 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.36771300448430494, "acc_stderr": 0.03236198350928275, "acc_norm": 0.36771300448430494, "acc_norm_stderr": 0.03236198350928275 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.26717557251908397, "acc_stderr": 0.03880848301082396, "acc_norm": 0.26717557251908397, "acc_norm_stderr": 0.03880848301082396 }, "harness|hendrycksTest-international_law|5": { "acc": 0.24793388429752067, "acc_stderr": 0.03941897526516301, "acc_norm": 0.24793388429752067, "acc_norm_stderr": 0.03941897526516301 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.2962962962962963, "acc_stderr": 0.04414343666854933, "acc_norm": 0.2962962962962963, "acc_norm_stderr": 0.04414343666854933 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.25153374233128833, "acc_stderr": 0.034089978868575295, "acc_norm": 0.25153374233128833, "acc_norm_stderr": 0.034089978868575295 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.26785714285714285, "acc_stderr": 0.04203277291467764, "acc_norm": 0.26785714285714285, "acc_norm_stderr": 0.04203277291467764 }, "harness|hendrycksTest-management|5": { "acc": 0.2912621359223301, "acc_stderr": 0.04498676320572921, "acc_norm": 0.2912621359223301, "acc_norm_stderr": 0.04498676320572921 }, "harness|hendrycksTest-marketing|5": { "acc": 0.26495726495726496, "acc_stderr": 0.02891120880274947, "acc_norm": 0.26495726495726496, "acc_norm_stderr": 0.02891120880274947 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.27, "acc_stderr": 0.04461960433384741, "acc_norm": 0.27, "acc_norm_stderr": 0.04461960433384741 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.2950191570881226, "acc_stderr": 0.01630836377293272, "acc_norm": 0.2950191570881226, "acc_norm_stderr": 0.01630836377293272 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.24855491329479767, "acc_stderr": 0.023267528432100174, "acc_norm": 0.24855491329479767, "acc_norm_stderr": 0.023267528432100174 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.24134078212290502, "acc_stderr": 0.014310999547961459, "acc_norm": 0.24134078212290502, "acc_norm_stderr": 0.014310999547961459 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.24183006535947713, "acc_stderr": 0.024518195641879334, "acc_norm": 0.24183006535947713, "acc_norm_stderr": 0.024518195641879334 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.2733118971061093, "acc_stderr": 0.02531176597542612, "acc_norm": 0.2733118971061093, "acc_norm_stderr": 0.02531176597542612 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.28703703703703703, "acc_stderr": 0.02517104191530968, "acc_norm": 0.28703703703703703, "acc_norm_stderr": 0.02517104191530968 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 
0.28368794326241137, "acc_stderr": 0.02689170942834396, "acc_norm": 0.28368794326241137, "acc_norm_stderr": 0.02689170942834396 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.242503259452412, "acc_stderr": 0.010946570966348768, "acc_norm": 0.242503259452412, "acc_norm_stderr": 0.010946570966348768 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.20955882352941177, "acc_stderr": 0.02472311040767705, "acc_norm": 0.20955882352941177, "acc_norm_stderr": 0.02472311040767705 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.2647058823529412, "acc_stderr": 0.01784808957491323, "acc_norm": 0.2647058823529412, "acc_norm_stderr": 0.01784808957491323 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.36363636363636365, "acc_stderr": 0.04607582090719976, "acc_norm": 0.36363636363636365, "acc_norm_stderr": 0.04607582090719976 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.3183673469387755, "acc_stderr": 0.029822533793982062, "acc_norm": 0.3183673469387755, "acc_norm_stderr": 0.029822533793982062 }, "harness|hendrycksTest-sociology|5": { "acc": 0.25870646766169153, "acc_stderr": 0.030965903123573054, "acc_norm": 0.25870646766169153, "acc_norm_stderr": 0.030965903123573054 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.28, "acc_stderr": 0.04512608598542127, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542127 }, "harness|hendrycksTest-virology|5": { "acc": 0.29518072289156627, "acc_stderr": 0.035509201856896294, "acc_norm": 0.29518072289156627, "acc_norm_stderr": 0.035509201856896294 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.3333333333333333, "acc_stderr": 0.03615507630310935, "acc_norm": 0.3333333333333333, "acc_norm_stderr": 0.03615507630310935 }, "harness|truthfulqa:mc|0": { "mc1": 0.23255813953488372, "mc1_stderr": 0.014789157531080503, "mc2": 0.3719034123619623, "mc2_stderr": 0.014194966399461809 }, "harness|winogrande|5": { "acc": 0.665351223362273, "acc_stderr": 0.013261823629558366 }, "harness|gsm8k|5": { "acc": 0.029567854435178165, "acc_stderr": 0.004665893134220805 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
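A hedged sketch for fetching the aggregated results file linked under "Latest results" (the filename comes from that link; the top-level `"results"` key is an assumption about the JSON layout, not confirmed by this card):

```python
import json
from huggingface_hub import hf_hub_download

# Download the timestamped results JSON referenced in "Latest results".
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_xaviviro__FLAMA-0.1-3B",
    filename="results_2023-12-28T07-59-14.342619.json",
    repo_type="dataset",
)
with open(path) as f:
    results = json.load(f)

# Assumed layout: a "results" dict keyed by task, with an "all" aggregate.
print(results["results"]["all"]["acc_norm"])
```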
open-llm-leaderboard/details_xaviviro__FLAMA-0.1-3B
[ "region:us" ]
2023-12-28T08:00:57+00:00
{"pretty_name": "Evaluation run of xaviviro/FLAMA-0.1-3B", "dataset_summary": "Dataset automatically created during the evaluation run of model [xaviviro/FLAMA-0.1-3B](https://huggingface.co/xaviviro/FLAMA-0.1-3B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_xaviviro__FLAMA-0.1-3B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-28T07:59:14.342619](https://huggingface.co/datasets/open-llm-leaderboard/details_xaviviro__FLAMA-0.1-3B/blob/main/results_2023-12-28T07-59-14.342619.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.27494961706906274,\n \"acc_stderr\": 0.03143686192639473,\n \"acc_norm\": 0.2760581114123852,\n \"acc_norm_stderr\": 0.032193127558020423,\n \"mc1\": 0.23255813953488372,\n \"mc1_stderr\": 0.014789157531080503,\n \"mc2\": 0.3719034123619623,\n \"mc2_stderr\": 0.014194966399461809\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.3916382252559727,\n \"acc_stderr\": 0.014264122124938217,\n \"acc_norm\": 0.41723549488054607,\n \"acc_norm_stderr\": 0.01440982551840308\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.5292770364469229,\n \"acc_stderr\": 0.004981220135882323,\n \"acc_norm\": 0.7141007767377017,\n \"acc_norm_stderr\": 0.0045091819193228385\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.03944624162501116,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.03944624162501116\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.03583496176361063,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.03583496176361063\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.2679245283018868,\n \"acc_stderr\": 0.027257260322494845,\n \"acc_norm\": 0.2679245283018868,\n \"acc_norm_stderr\": 0.027257260322494845\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.24305555555555555,\n \"acc_stderr\": 0.0358687928008034,\n \"acc_norm\": 0.24305555555555555,\n \"acc_norm_stderr\": 0.0358687928008034\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 
0.04292346959909283\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.19653179190751446,\n \"acc_stderr\": 0.030299574664788147,\n \"acc_norm\": 0.19653179190751446,\n \"acc_norm_stderr\": 0.030299574664788147\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.18627450980392157,\n \"acc_stderr\": 0.03873958714149351,\n \"acc_norm\": 0.18627450980392157,\n \"acc_norm_stderr\": 0.03873958714149351\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.33191489361702126,\n \"acc_stderr\": 0.030783736757745647,\n \"acc_norm\": 0.33191489361702126,\n \"acc_norm_stderr\": 0.030783736757745647\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.23684210526315788,\n \"acc_stderr\": 0.039994238792813344,\n \"acc_norm\": 0.23684210526315788,\n \"acc_norm_stderr\": 0.039994238792813344\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2827586206896552,\n \"acc_stderr\": 0.03752833958003336,\n \"acc_norm\": 0.2827586206896552,\n \"acc_norm_stderr\": 0.03752833958003336\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.28835978835978837,\n \"acc_stderr\": 0.02333065405453591,\n \"acc_norm\": 0.28835978835978837,\n \"acc_norm_stderr\": 0.02333065405453591\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23809523809523808,\n \"acc_stderr\": 0.038095238095238106,\n \"acc_norm\": 0.23809523809523808,\n \"acc_norm_stderr\": 0.038095238095238106\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2645161290322581,\n \"acc_stderr\": 0.02509189237885928,\n \"acc_norm\": 0.2645161290322581,\n \"acc_norm_stderr\": 0.02509189237885928\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.2660098522167488,\n \"acc_stderr\": 0.03108982600293753,\n \"acc_norm\": 0.2660098522167488,\n \"acc_norm_stderr\": 0.03108982600293753\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.3090909090909091,\n \"acc_stderr\": 0.036085410115739666,\n \"acc_norm\": 0.3090909090909091,\n \"acc_norm_stderr\": 0.036085410115739666\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.030532892233932036,\n \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.030532892233932036\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.23316062176165803,\n \"acc_stderr\": 0.03051611137147601,\n \"acc_norm\": 0.23316062176165803,\n \"acc_norm_stderr\": 0.03051611137147601\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.25384615384615383,\n 
\"acc_stderr\": 0.022066054378726257,\n \"acc_norm\": 0.25384615384615383,\n \"acc_norm_stderr\": 0.022066054378726257\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.24814814814814815,\n \"acc_stderr\": 0.0263357394040558,\n \"acc_norm\": 0.24814814814814815,\n \"acc_norm_stderr\": 0.0263357394040558\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.24369747899159663,\n \"acc_stderr\": 0.02788682807838056,\n \"acc_norm\": 0.24369747899159663,\n \"acc_norm_stderr\": 0.02788682807838056\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.2582781456953642,\n \"acc_stderr\": 0.035737053147634576,\n \"acc_norm\": 0.2582781456953642,\n \"acc_norm_stderr\": 0.035737053147634576\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.24036697247706423,\n \"acc_stderr\": 0.01832060732096407,\n \"acc_norm\": 0.24036697247706423,\n \"acc_norm_stderr\": 0.01832060732096407\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.02876511171804694,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.02876511171804694\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.27450980392156865,\n \"acc_stderr\": 0.03132179803083292,\n \"acc_norm\": 0.27450980392156865,\n \"acc_norm_stderr\": 0.03132179803083292\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.22784810126582278,\n \"acc_stderr\": 0.02730348459906942,\n \"acc_norm\": 0.22784810126582278,\n \"acc_norm_stderr\": 0.02730348459906942\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.36771300448430494,\n \"acc_stderr\": 0.03236198350928275,\n \"acc_norm\": 0.36771300448430494,\n \"acc_norm_stderr\": 0.03236198350928275\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.26717557251908397,\n \"acc_stderr\": 0.03880848301082396,\n \"acc_norm\": 0.26717557251908397,\n \"acc_norm_stderr\": 0.03880848301082396\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.24793388429752067,\n \"acc_stderr\": 0.03941897526516301,\n \"acc_norm\": 0.24793388429752067,\n \"acc_norm_stderr\": 0.03941897526516301\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.04414343666854933,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.04414343666854933\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.25153374233128833,\n \"acc_stderr\": 0.034089978868575295,\n \"acc_norm\": 0.25153374233128833,\n \"acc_norm_stderr\": 0.034089978868575295\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.26785714285714285,\n \"acc_stderr\": 0.04203277291467764,\n \"acc_norm\": 0.26785714285714285,\n \"acc_norm_stderr\": 0.04203277291467764\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.2912621359223301,\n \"acc_stderr\": 0.04498676320572921,\n \"acc_norm\": 0.2912621359223301,\n \"acc_norm_stderr\": 0.04498676320572921\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.26495726495726496,\n \"acc_stderr\": 0.02891120880274947,\n \"acc_norm\": 0.26495726495726496,\n \"acc_norm_stderr\": 0.02891120880274947\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2950191570881226,\n \"acc_stderr\": 0.01630836377293272,\n \"acc_norm\": 
0.2950191570881226,\n \"acc_norm_stderr\": 0.01630836377293272\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24855491329479767,\n \"acc_stderr\": 0.023267528432100174,\n \"acc_norm\": 0.24855491329479767,\n \"acc_norm_stderr\": 0.023267528432100174\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24134078212290502,\n \"acc_stderr\": 0.014310999547961459,\n \"acc_norm\": 0.24134078212290502,\n \"acc_norm_stderr\": 0.014310999547961459\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.24183006535947713,\n \"acc_stderr\": 0.024518195641879334,\n \"acc_norm\": 0.24183006535947713,\n \"acc_norm_stderr\": 0.024518195641879334\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.2733118971061093,\n \"acc_stderr\": 0.02531176597542612,\n \"acc_norm\": 0.2733118971061093,\n \"acc_norm_stderr\": 0.02531176597542612\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.28703703703703703,\n \"acc_stderr\": 0.02517104191530968,\n \"acc_norm\": 0.28703703703703703,\n \"acc_norm_stderr\": 0.02517104191530968\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.28368794326241137,\n \"acc_stderr\": 0.02689170942834396,\n \"acc_norm\": 0.28368794326241137,\n \"acc_norm_stderr\": 0.02689170942834396\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.242503259452412,\n \"acc_stderr\": 0.010946570966348768,\n \"acc_norm\": 0.242503259452412,\n \"acc_norm_stderr\": 0.010946570966348768\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.20955882352941177,\n \"acc_stderr\": 0.02472311040767705,\n \"acc_norm\": 0.20955882352941177,\n \"acc_norm_stderr\": 0.02472311040767705\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.2647058823529412,\n \"acc_stderr\": 0.01784808957491323,\n \"acc_norm\": 0.2647058823529412,\n \"acc_norm_stderr\": 0.01784808957491323\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.36363636363636365,\n \"acc_stderr\": 0.04607582090719976,\n \"acc_norm\": 0.36363636363636365,\n \"acc_norm_stderr\": 0.04607582090719976\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.3183673469387755,\n \"acc_stderr\": 0.029822533793982062,\n \"acc_norm\": 0.3183673469387755,\n \"acc_norm_stderr\": 0.029822533793982062\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.25870646766169153,\n \"acc_stderr\": 0.030965903123573054,\n \"acc_norm\": 0.25870646766169153,\n \"acc_norm_stderr\": 0.030965903123573054\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.29518072289156627,\n \"acc_stderr\": 0.035509201856896294,\n \"acc_norm\": 0.29518072289156627,\n \"acc_norm_stderr\": 0.035509201856896294\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.03615507630310935,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.03615507630310935\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.23255813953488372,\n \"mc1_stderr\": 0.014789157531080503,\n \"mc2\": 0.3719034123619623,\n \"mc2_stderr\": 0.014194966399461809\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.665351223362273,\n \"acc_stderr\": 0.013261823629558366\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.029567854435178165,\n \"acc_stderr\": 0.004665893134220805\n }\n}\n```", "repo_url": 
"https://huggingface.co/xaviviro/FLAMA-0.1-3B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|arc:challenge|25_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|gsm8k|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hellaswag|10_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T07-59-14.342619.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T07-59-14.342619.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-28T07-59-14.342619.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-28T07-59-14.342619.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T07-59-14.342619.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T07-59-14.342619.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["**/details_harness|winogrande|5_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-28T07-59-14.342619.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_28T07_59_14.342619", "path": ["results_2023-12-28T07-59-14.342619.parquet"]}, {"split": "latest", "path": 
["results_2023-12-28T07-59-14.342619.parquet"]}]}]}
2023-12-28T08:01:19+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of xaviviro/FLAMA-0.1-3B Dataset automatically created during the evaluation run of model xaviviro/FLAMA-0.1-3B on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-28T07:59:14.342619 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
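The processed card above drops its fenced loading example; a minimal sketch of what loading one configuration of such a details dataset typically looks like is given below. The repo id is an assumption based on the leaderboard's `details_<org>__<model>` naming convention.

```python
from datasets import load_dataset

# Repo id assumed from the Open LLM Leaderboard convention
# "open-llm-leaderboard/details_<org>__<model>"; adjust if it differs.
data = load_dataset(
    "open-llm-leaderboard/details_xaviviro__FLAMA-0.1-3B",
    "harness_winogrande_5",   # one of the 63 task configurations
    split="train",            # "train" always points at the latest results
)
print(data[0])
```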
[ "# Dataset Card for Evaluation run of xaviviro/FLAMA-0.1-3B\n\n\n\nDataset automatically created during the evaluation run of model xaviviro/FLAMA-0.1-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-28T07:59:14.342619(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of xaviviro/FLAMA-0.1-3B\n\n\n\nDataset automatically created during the evaluation run of model xaviviro/FLAMA-0.1-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-28T07:59:14.342619(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 181, 67, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of xaviviro/FLAMA-0.1-3B\n\n\n\nDataset automatically created during the evaluation run of model xaviviro/FLAMA-0.1-3B on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-28T07:59:14.342619(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
4f189fc1ea10878f8b673b62c43905a963700b8a
# Dataset Card for "finetune_mistral_7b_bs_prediction" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
johnchen1/finetune_mistral_7b_bs_prediction
[ "region:us" ]
2023-12-28T08:06:50+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3299961, "num_examples": 1478}], "download_size": 1270466, "dataset_size": 3299961}}
2023-12-28T08:06:56+00:00
[]
[]
TAGS #region-us
# Dataset Card for "finetune_mistral_7b_bs_prediction" More Information needed
[ "# Dataset Card for \"finetune_mistral_7b_bs_prediction\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"finetune_mistral_7b_bs_prediction\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"finetune_mistral_7b_bs_prediction\"\n\nMore Information needed" ]
17f4c191afdf455569ab96f63ecae2d5d1a7181e
# Dataset Card for "TRAIN_mistral_7b_bs_prediction" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
johnchen1/TRAIN_mistral_7b_bs_prediction
[ "region:us" ]
2023-12-28T08:09:27+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3299961, "num_examples": 1478}], "download_size": 0, "dataset_size": 3299961}}
2023-12-28T08:12:11+00:00
[]
[]
TAGS #region-us
# Dataset Card for "TRAIN_mistral_7b_bs_prediction" More Information needed
[ "# Dataset Card for \"TRAIN_mistral_7b_bs_prediction\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"TRAIN_mistral_7b_bs_prediction\"\n\nMore Information needed" ]
[ 6, 24 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"TRAIN_mistral_7b_bs_prediction\"\n\nMore Information needed" ]
f78c5295902862747c902c28662226f023e06c38
<br> **🔥Update**: - [2024/01/06] We release the commercial-use version of MathPile, namely [MathPile_Commercial](https://huggingface.co/datasets/GAIR/MathPile_Commercial). - [2024/01/06] We release the new version (v0.2, cleaner version) of MathPile. It has been updated to the `main` branch (also the `v0.2` branch). The main updates are as follows: - fixed a problem with the display of mathematical formulas in the Wikipedia subset, which was caused by the HTML conversion to markdown; - fixed unclosed caption parentheses in the image environment in arXiv and macro command substitutions (as suggested in [issue 1](https://huggingface.co/datasets/GAIR/MathPile/discussions/1)), as well as improper line wrapping in paragraphs. - If you would like to download the original MathPile, you can download it by setting the `revision` parameter to `v0.1`. - [2023/12/29] Thanks for your interest in our dataset. We strongly recommend that you complete all the information on the form when applying to facilitate our review process. <br> # Dataset Card for Dataset Name <!-- Provide a quick summary of the dataset. --> We introduce MathPile, a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. Our work differs significantly from previous work in the following characteristics: <div align="center"> <img src="./imgs/mathpile-features.png" width=45%/> </div> - **Math-centric**: MathPile uniquely caters to the math domain, unlike general domain-focused corpora like Pile and RedPajama, or multilingual-focused ones like ROOTS and The Stack. While there are math-centric corpora, they're often either closed-sourced, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath. - **Diversity**: MathPile draws from a wide range of sources: **Textbooks** (including lecture notes), **arXiv**, **Wikipedia**, **ProofWiki**, **StackExchange**, and **Web Pages**. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. **This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens).** - **High-Quality**: We adhered to the principle of *less is more*, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus. - **Data Documentation**: To enhance transparency, we've extensively documented MathPile. This includes a **dataset sheet** (see Table 5 in our paper) and **quality annotations** for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users flexibility to tailor the data to their needs. We've also performed **data contamination detection** to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM. <div align="center"> <img src="./imgs/mathpile-overview.png" width=70%/> </div> ## Dataset Details Refer to Appendix A in [our paper](https://huggingface.co/papers/2312.17120) for the MathPile Dataset Sheet. ### How to download MathPile? Currently, we recommend that you download it locally from the command line (such as `huggingface-cli`) instead of the Python function `load_dataset("GAIR/MathPile")` (due to a possible network issue), unpack the gz file, and then load the jsonl file. 
Some commands that might be helpful are as follows: ``` $ huggingface-cli download --resume-download --repo-type dataset GAIR/MathPile --local-dir /your/path/ --local-dir-use-symlinks False $ cd /your/path/ $ find . -type f -name "*.gz" -exec gzip -d {} \; ``` Later we will also support loading the dataset via `load_dataset("GAIR/MathPile")`. Stay tuned. ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** GAIR Lab, SJTU - **Funded by [optional]:** GAIR Lab, SJTU - **Language(s) (NLP):** English - **License:** CC BY-NC-SA 4.0 ### Dataset Sources <!-- Provide the basic links for the dataset. --> - **Repository:** https://github.com/GAIR-NLP/MathPile - **Paper [optional]:** https://huggingface.co/papers/2312.17120 - **Demo [optional]:** https://gair-nlp.github.io/MathPile/ ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use To develop mathematical language models. <!-- This section describes suitable use cases for the dataset. --> ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> This dataset may not be suitable for scenarios unrelated to mathematics or reasoning. ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> ``` { "text": ..., "SubSet": "CommomCrawl" | "StackExchange" | "Textbooks" | "Wikipedia" | "ProofWiki" | "arXiv", "meta": {"language_detection_score": ..., "idx": ..., "contain_at_least_two_stop_words": ...} } ``` ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> To create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models. ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). --> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> We sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. Throughout the MathPile development, we meticulously source and gather data, applying a rigorous and math-specific pipeline. This pipeline encompasses various stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication, all aimed at maintaining the high quality of the corpus. Please see [our paper](https://arxiv.org/abs/2312.17120) for more details. ### Annotations <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> We provided *quality annotations* (such as language identification scores and the ratio of symbols to words) for documents from Web pages (i.e., Common Crawl and Wikipedia). These annotations offer future researchers and developers the flexibility to filter the data according to their criteria, tailoring it to their specific needs. 
#### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> The corpus may contain academic emails and author names, as seen in papers from sources like arXiv. However, we view this as justifiable and within acceptable bounds. ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> - The decisions made during the data collection and processing phases might not always be optimal. - Some documents in MathPile may not always be of the highest quality. We are committed to continually refining and optimizing this corpus. ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. ## Citation <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> If you find our work useful or use MathPile, please cite our paper: ``` @article{wang2023mathpile, title={Generative AI for Math: Part I -- MathPile: A Billion-Token-Scale Pretraining Corpus for Math}, author={Wang, Zengzhi and Xia, Rui and Liu, Pengfei}, journal={arXiv preprint arXiv:2312.17120}, year={2023} } ``` ## Dataset Card Authors [Zengzhi Wang](https://scholar.google.com/citations?user=qLS4f-8AAAAJ&hl=en) ## Dataset Card Contact [email protected], [email protected]
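Since the corpus ships as gzipped JSONL files, a minimal sketch for loading the unpacked files locally and filtering on the quality annotations described above might look like the following. The glob path and the 0.9 threshold are assumptions; the field names follow the Dataset Structure section.

```python
from datasets import load_dataset

# Load the unpacked JSONL files locally; the path pattern is an assumption.
ds = load_dataset("json", data_files="/your/path/**/*.jsonl", split="train")

# Use the quality annotations on web-sourced documents, e.g. keep a document
# only when its language-identification score is absent (non-web subsets)
# or clears an example threshold of 0.9.
def keep(example):
    meta = example.get("meta") or {}
    score = meta.get("language_detection_score")
    return score is None or score > 0.9

filtered = ds.filter(keep)
```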
GAIR/MathPile
[ "size_categories:1B<n<10B", "language:en", "license:cc-by-nc-sa-4.0", "arxiv:2312.17120", "region:us" ]
2023-12-28T08:17:59+00:00
{"language": ["en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["1B<n<10B"], "extra_gated_prompt": "By using this data, you agree to comply with the original usage licenses of all sources contributing to MathPile. If the source data of this dataset is subject to a more restrictive license than CC BY-NC-SA 4.0, then this dataset conforms to that more stringent licensing. In all other scenarios, it is governed by the CC BY-NC-SA 4.0 license. Access to this dataset is granted automatically once you accept the license terms and complete all the required fields below.", "extra_gated_fields": {"Your Full Name": "text", "Organization or Entity you are affiliated with": "text", "Country or state you are located in": "text", "Your email": "text", "What is your intended use(s) for this dataset": "text", "You AGREE to use this dataset for non-commercial use ONLY": "checkbox", "You AGREE to comply with the original usage licenses of all sources contributing to this dataset and the license of this dataset": "checkbox", "You AGREE to cite our paper if you use this dataset": "checkbox", "You ENSURE that the information you have provided is true and accurate": "checkbox"}}
2024-02-13T03:38:51+00:00
[ "2312.17120" ]
[ "en" ]
TAGS #size_categories-1B<n<10B #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.17120 #region-us
<br> Update: - [2023/01/06] We release the commercial-use version of MathPile, namely MathPile_Commercial. - [2023/01/06] We release the new version (v0.2, cleaner version) of MathPile. It has been updated to the 'main' branch (also the 'v0.2' branch). The main updates are as follows: - fixed a problem with the display of mathematical formulas in the Wikipedia subset, which was caused by the HTML conversion to markdown; - fixed unclosed caption parentheses in the image environment in arXiv and macro command substitutions (as suggested in issue 1), as well as improper line wrapping in paragraphs. - If you would like to download the original MathPile, you can download it by setting the 'revision' parameter to 'v0.1'. - [2023/12/29] Thanks for your interest in our dataset. We strongly recommend that you complete all the information on the form when applying to facilitate our review process. <br> # Dataset Card for Dataset Name We introduce MathPile a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. our work is significantly different from the previous work in the following characteristics: <div align="center"> <img src="./imgs/URL" width=45%/> </div> - Math-centric: MathPile uniquely caters to the math domain, unlike general domain-focused corpora like Pile and RedPajama, or multilingual-focused ones like ROOTS and The Stack. While there are math-centric corpora, they're often either closed-sourced, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath. - Diversity: MathPile draws from a wide range of sources: Textbooks (including lecture notes), arXiv, Wikipedia, ProofWiki, StackExchange, and Web Pages. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens). - High-Quality: We adhered to the principle of *less is more*, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus. - Data Documentation: To enhance transparency, we've extensively documented MathPile. This includes a dataset sheet (see Table 5 in our paper) and quality annotations for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users flexibility to tailor the data to their needs. We've also performed data contamination detection to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM. <div align="center"> <img src="./imgs/URL" width=70%/> </div> ## Dataset Details Refer to Appendix A in our paper for the MathPile Dataset Sheet. ### How to download MathPile? Currently, we recommend that you download it locally from the command line (such as 'huggingface-cli') instead of the python function 'load_dataset("GAIR/MathPile")' (due to a possible network issue), unpack the gz file, and then load the jsonl file. Some commands that might be helpful are as follows Later we will also support the datasets loading via 'load_dataset("GAIR/MathPile")'. Stay tuned. 
### Dataset Description - Curated by: GAIR Lab, SJTU - Funded by [optional]: GAIR Lab, SJTU - Language(s) (NLP): English - License: CC BY-NC-SA 4.0 ### Dataset Sources - Repository: URL - Paper [optional]: URL - Demo [optional]: URL ## Uses ### Direct Use To develop mathematical language models. ### Out-of-Scope Use This dataset may be not suitable for scenarios unrelated to mathematics or reasoning. ## Dataset Structure ## Dataset Creation ### Curation Rationale To create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models. ### Source Data #### Data Collection and Processing We sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. Throughout the MathPile development, we meticulously source and gather data, applying a rigorous and math-specific pipeline. This pipeline encompasses various stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication, all aimed at maintaining the high quality of the corpus. Please see our paper for more details. ### Annotations We provided *quantity annotations* (such as language identification scores and the ratio of symbols to words) for documents from Web pages (i.e., Common Crawl and Wikipedia). These annotations offer future researchers and developers the flexibility to filter the data according to their criteria, tailoring it to their specific needs. #### Personal and Sensitive Information The corpus may potentially contain academic emails and the author's name, as seen in papers from sources like arXiv. However, we view this as justifiable and within acceptable bounds. ## Bias, Risks, and Limitations - The decisions made during the data collection and processing phases might not always be optimal. - Some documents in MathPile may not always be of the highest quality. We are committed to continually refining and optimizing this corpus. ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. If you find our work useful or use MathPile, please cite our paper: ## Dataset Card Authors Zengzhi Wang ## Dataset Card Contact stefanpengfei@URL, URL@URL
[ "# Dataset Card for Dataset Name\n\n\n\n\n\n\nWe introduce MathPile a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. our work is significantly different from the previous work in the following characteristics:\n\n\n\n<div align=\"center\">\n<img src=\"./imgs/URL\" width=45%/>\n</div>\n\n\n\n- Math-centric: MathPile uniquely caters to the math domain, unlike general domain-focused corpora like Pile and RedPajama, or multilingual-focused ones like ROOTS and The Stack. While there are math-centric corpora, they're often either closed-sourced, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath.\n\n- Diversity: MathPile draws from a wide range of sources: Textbooks (including lecture notes), arXiv, Wikipedia, ProofWiki, StackExchange, and Web Pages. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens).\n\n- High-Quality: We adhered to the principle of *less is more*, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus.\n\n- Data Documentation: To enhance transparency, we've extensively documented MathPile. This includes a dataset sheet (see Table 5 in our paper) and quality annotations for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users flexibility to tailor the data to their needs. We've also performed data contamination detection to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM.\n\n\n\n<div align=\"center\">\n<img src=\"./imgs/URL\" width=70%/>\n</div>", "## Dataset Details\n\nRefer to Appendix A in our paper for the MathPile Dataset Sheet.", "### How to download MathPile?\n\nCurrently, we recommend that you download it locally from the command line (such as 'huggingface-cli') instead of the python function 'load_dataset(\"GAIR/MathPile\")' (due to a possible network issue), unpack the gz file, and then load the jsonl file. Some commands that might be helpful are as follows\n\n\n\nLater we will also support the datasets loading via 'load_dataset(\"GAIR/MathPile\")'. Stay tuned.", "### Dataset Description\n\n\n\n\n\n- Curated by: GAIR Lab, SJTU\n- Funded by [optional]: GAIR Lab, SJTU\n- Language(s) (NLP): English\n- License: CC BY-NC-SA 4.0", "### Dataset Sources\n\n\n\n- Repository: URL\n- Paper [optional]: URL\n- Demo [optional]: URL", "## Uses", "### Direct Use\n\nTo develop mathematical language models.", "### Out-of-Scope Use\n\n\n\nThis dataset may be not suitable for scenarios unrelated to mathematics or reasoning.", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale\n\n\n\nTo create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models.", "### Source Data", "#### Data Collection and Processing\n\n\n\nWe sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. Throughout the MathPile development, we meticulously source and\ngather data, applying a rigorous and math-specific pipeline. 
This pipeline encompasses various stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication,\nall aimed at maintaining the high quality of the corpus. Please see our paper for more details.", "### Annotations \n\n\n\nWe provided *quantity annotations* (such as language identification scores and the ratio of symbols to words) for documents from Web pages (i.e., Common Crawl and Wikipedia). These annotations offer future researchers and developers\nthe flexibility to filter the data according to their criteria, tailoring it to their specific needs.", "#### Personal and Sensitive Information\n\n\n\nThe corpus may potentially contain academic emails and the author's name, as seen in papers from sources like arXiv. However, we view this as justifiable and within acceptable bounds.", "## Bias, Risks, and Limitations\n\n\n\n\n- The decisions made during the data collection and processing phases might not always be optimal.\n- Some documents in MathPile may not always be of the highest quality. We are committed to continually refining and optimizing this corpus.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset.\n\nIf you find our work useful or use MathPile, please cite our paper:", "## Dataset Card Authors\n\nZengzhi Wang", "## Dataset Card Contact\n\n\nstefanpengfei@URL, URL@URL" ]
[ "TAGS\n#size_categories-1B<n<10B #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.17120 #region-us \n", "# Dataset Card for Dataset Name\n\n\n\n\n\n\nWe introduce MathPile a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. our work is significantly different from the previous work in the following characteristics:\n\n\n\n<div align=\"center\">\n<img src=\"./imgs/URL\" width=45%/>\n</div>\n\n\n\n- Math-centric: MathPile uniquely caters to the math domain, unlike general domain-focused corpora like Pile and RedPajama, or multilingual-focused ones like ROOTS and The Stack. While there are math-centric corpora, they're often either closed-sourced, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath.\n\n- Diversity: MathPile draws from a wide range of sources: Textbooks (including lecture notes), arXiv, Wikipedia, ProofWiki, StackExchange, and Web Pages. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens).\n\n- High-Quality: We adhered to the principle of *less is more*, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus.\n\n- Data Documentation: To enhance transparency, we've extensively documented MathPile. This includes a dataset sheet (see Table 5 in our paper) and quality annotations for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users flexibility to tailor the data to their needs. We've also performed data contamination detection to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM.\n\n\n\n<div align=\"center\">\n<img src=\"./imgs/URL\" width=70%/>\n</div>", "## Dataset Details\n\nRefer to Appendix A in our paper for the MathPile Dataset Sheet.", "### How to download MathPile?\n\nCurrently, we recommend that you download it locally from the command line (such as 'huggingface-cli') instead of the python function 'load_dataset(\"GAIR/MathPile\")' (due to a possible network issue), unpack the gz file, and then load the jsonl file. Some commands that might be helpful are as follows\n\n\n\nLater we will also support the datasets loading via 'load_dataset(\"GAIR/MathPile\")'. Stay tuned.", "### Dataset Description\n\n\n\n\n\n- Curated by: GAIR Lab, SJTU\n- Funded by [optional]: GAIR Lab, SJTU\n- Language(s) (NLP): English\n- License: CC BY-NC-SA 4.0", "### Dataset Sources\n\n\n\n- Repository: URL\n- Paper [optional]: URL\n- Demo [optional]: URL", "## Uses", "### Direct Use\n\nTo develop mathematical language models.", "### Out-of-Scope Use\n\n\n\nThis dataset may be not suitable for scenarios unrelated to mathematics or reasoning.", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale\n\n\n\nTo create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models.", "### Source Data", "#### Data Collection and Processing\n\n\n\nWe sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. 
Throughout the MathPile development, we meticulously source and\ngather data, applying a rigorous and math-specific pipeline. This pipeline encompasses various stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication,\nall aimed at maintaining the high quality of the corpus. Please see our paper for more details.", "### Annotations \n\n\n\nWe provided *quantity annotations* (such as language identification scores and the ratio of symbols to words) for documents from Web pages (i.e., Common Crawl and Wikipedia). These annotations offer future researchers and developers\nthe flexibility to filter the data according to their criteria, tailoring it to their specific needs.", "#### Personal and Sensitive Information\n\n\n\nThe corpus may potentially contain academic emails and the author's name, as seen in papers from sources like arXiv. However, we view this as justifiable and within acceptable bounds.", "## Bias, Risks, and Limitations\n\n\n\n\n- The decisions made during the data collection and processing phases might not always be optimal.\n- Some documents in MathPile may not always be of the highest quality. We are committed to continually refining and optimizing this corpus.", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset.\n\nIf you find our work useful or use MathPile, please cite our paper:", "## Dataset Card Authors\n\nZengzhi Wang", "## Dataset Card Contact\n\n\nstefanpengfei@URL, URL@URL" ]
[ 44, 507, 23, 126, 53, 28, 3, 12, 29, 6, 5, 38, 4, 123, 82, 51, 59, 45, 10, 15 ]
[ "passage: TAGS\n#size_categories-1B<n<10B #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.17120 #region-us \n", "passage: # Dataset Card for Dataset Name\n\n\n\n\n\n\nWe introduce MathPile a diverse and high-quality math-centric corpus comprising about 9.5 billion tokens. our work is significantly different from the previous work in the following characteristics:\n\n\n\n<div align=\"center\">\n<img src=\"./imgs/URL\" width=45%/>\n</div>\n\n\n\n- Math-centric: MathPile uniquely caters to the math domain, unlike general domain-focused corpora like Pile and RedPajama, or multilingual-focused ones like ROOTS and The Stack. While there are math-centric corpora, they're often either closed-sourced, like Google's Minerva and OpenAI's MathMix, or lack diversity, such as ProofPile and OpenWebMath.\n\n- Diversity: MathPile draws from a wide range of sources: Textbooks (including lecture notes), arXiv, Wikipedia, ProofWiki, StackExchange, and Web Pages. It encompasses mathematical content suitable for K-12, college, postgraduate levels, and math competitions. This diversity is a first, especially with our release of a significant collection of high-quality textbooks (~0.19B tokens).\n\n- High-Quality: We adhered to the principle of *less is more*, firmly believing in the supremacy of data quality over quantity, even in the pre-training phase. Our meticulous data collection and processing efforts included a complex suite of preprocessing, prefiltering, cleaning, filtering, and deduplication, ensuring the high quality of our corpus.\n\n- Data Documentation: To enhance transparency, we've extensively documented MathPile. This includes a dataset sheet (see Table 5 in our paper) and quality annotations for web-sourced documents, like language identification scores and symbol-to-word ratios. This gives users flexibility to tailor the data to their needs. We've also performed data contamination detection to eliminate duplicates from benchmark test sets like MATH and MMLU-STEM.\n\n\n\n<div align=\"center\">\n<img src=\"./imgs/URL\" width=70%/>\n</div>## Dataset Details\n\nRefer to Appendix A in our paper for the MathPile Dataset Sheet.### How to download MathPile?\n\nCurrently, we recommend that you download it locally from the command line (such as 'huggingface-cli') instead of the python function 'load_dataset(\"GAIR/MathPile\")' (due to a possible network issue), unpack the gz file, and then load the jsonl file. Some commands that might be helpful are as follows\n\n\n\nLater we will also support the datasets loading via 'load_dataset(\"GAIR/MathPile\")'. Stay tuned.### Dataset Description\n\n\n\n\n\n- Curated by: GAIR Lab, SJTU\n- Funded by [optional]: GAIR Lab, SJTU\n- Language(s) (NLP): English\n- License: CC BY-NC-SA 4.0### Dataset Sources\n\n\n\n- Repository: URL\n- Paper [optional]: URL\n- Demo [optional]: URL## Uses### Direct Use\n\nTo develop mathematical language models.### Out-of-Scope Use\n\n\n\nThis dataset may be not suitable for scenarios unrelated to mathematics or reasoning.## Dataset Structure## Dataset Creation### Curation Rationale\n\n\n\nTo create a diverse and high-quality math-centric corpus, thereby enhancing the mathematical reasoning abilities of language models.### Source Data#### Data Collection and Processing\n\n\n\nWe sourced data from Textbooks, lecture notes, arXiv, Wikipedia, ProofWiki, StackExchange, and Common Crawl. Throughout the MathPile development, we meticulously source and\ngather data, applying a rigorous and math-specific pipeline. 
This pipeline encompasses various stages such as preprocessing, prefiltering, language identification, cleaning and filtering, and deduplication,\nall aimed at maintaining the high quality of the corpus. Please see our paper for more details." ]
80083c8c36e17b1acf34a7ff48db64eedced5ff4
# Dataset Card for "ldjnr_capybara_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/ldjnr_capybara_binarized
[ "region:us" ]
2023-12-28T08:28:01+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 82845632.0729676, "num_examples": 18587}, {"name": "test", "num_bytes": 9208536.927032392, "num_examples": 2066}], "download_size": 47876238, "dataset_size": 92054169.0}}
2023-12-28T08:28:14+00:00
[]
[]
TAGS #region-us
# Dataset Card for "ldjnr_capybara_binarized" More Information needed
[ "# Dataset Card for \"ldjnr_capybara_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"ldjnr_capybara_binarized\"\n\nMore Information needed" ]
[ 6, 21 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"ldjnr_capybara_binarized\"\n\nMore Information needed" ]
43e5e501929ca311d1b92b88b16d2425a4b3ffb7
# Dataset Card for "another_dummy_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
diamond0/another_dummy_dataset
[ "region:us" ]
2023-12-28T09:17:08+00:00
{"dataset_info": {"features": [{"name": "html_url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "comments", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "comment_length", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "embeddings", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 18124069, "num_examples": 2175}], "download_size": 10047762, "dataset_size": 18124069}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
2023-12-28T09:17:25+00:00
[]
[]
TAGS #region-us
# Dataset Card for "another_dummy_dataset" More Information needed
[ "# Dataset Card for \"another_dummy_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"another_dummy_dataset\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"another_dummy_dataset\"\n\nMore Information needed" ]
4b84286eb893fb66217a23f4b381dc949d3f1648
# Dataset Card for "p2d_raw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/p2d_raw
[ "region:us" ]
2023-12-28T09:24:36+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 251092, "num_examples": 900}, {"name": "test", "num_bytes": 80304, "num_examples": 300}, {"name": "validation", "num_bytes": 80304, "num_examples": 300}], "download_size": 76658, "dataset_size": 411700}}
2023-12-28T09:24:43+00:00
[]
[]
TAGS #region-us
# Dataset Card for "p2d_raw" More Information needed
[ "# Dataset Card for \"p2d_raw\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"p2d_raw\"\n\nMore Information needed" ]
[ 6, 15 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"p2d_raw\"\n\nMore Information needed" ]
aaa2d4c9a7a13bed414308f0cae5b039cf1278f8
# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1-Chat-RL-v1 <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configuration, each one coresponding to one of the evaluated task. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results. An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-28T09:30:28.410673](https://huggingface.co/datasets/open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1/blob/main/results_2023-12-28T09-30-28.410673.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6344608006071959, "acc_stderr": 0.03246479153994776, "acc_norm": 0.6364847695262345, "acc_norm_stderr": 0.03311425564391542, "mc1": 0.4039167686658507, "mc1_stderr": 0.017177276822584284, "mc2": 0.5669974462779248, "mc2_stderr": 0.015574572505669746 }, "harness|arc:challenge|25": { "acc": 0.6143344709897611, "acc_stderr": 0.01422425097325718, "acc_norm": 0.6629692832764505, "acc_norm_stderr": 0.013813476652902272 }, "harness|hellaswag|10": { "acc": 0.6733718382792272, "acc_stderr": 0.004680215003395924, "acc_norm": 0.8580959968133838, "acc_norm_stderr": 0.003482384956632785 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.6, "acc_stderr": 0.04232073695151589, "acc_norm": 0.6, "acc_norm_stderr": 0.04232073695151589 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6842105263157895, "acc_stderr": 0.0378272898086547, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.0378272898086547 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6754716981132075, "acc_stderr": 0.02881561571343211, "acc_norm": 0.6754716981132075, "acc_norm_stderr": 0.02881561571343211 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7013888888888888, "acc_stderr": 0.03827052357950756, "acc_norm": 0.7013888888888888, "acc_norm_stderr": 0.03827052357950756 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620332 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956912 }, "harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 
## Latest results

These are the [latest results from run 2023-12-28T09:30:28.410673](https://huggingface.co/datasets/open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1/blob/main/results_2023-12-28T09-30-28.410673.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):

```python
{
    "all": { "acc": 0.6344608006071959, "acc_stderr": 0.03246479153994776, "acc_norm": 0.6364847695262345, "acc_norm_stderr": 0.03311425564391542, "mc1": 0.4039167686658507, "mc1_stderr": 0.017177276822584284, "mc2": 0.5669974462779248, "mc2_stderr": 0.015574572505669746 },
    "harness|arc:challenge|25": { "acc": 0.6143344709897611, "acc_stderr": 0.01422425097325718, "acc_norm": 0.6629692832764505, "acc_norm_stderr": 0.013813476652902272 },
    "harness|hellaswag|10": { "acc": 0.6733718382792272, "acc_stderr": 0.004680215003395924, "acc_norm": 0.8580959968133838, "acc_norm_stderr": 0.003482384956632785 },
    "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.37, "acc_stderr": 0.04852365870939099, "acc_norm": 0.37, "acc_norm_stderr": 0.04852365870939099 },
    "harness|hendrycksTest-anatomy|5": { "acc": 0.6, "acc_stderr": 0.04232073695151589, "acc_norm": 0.6, "acc_norm_stderr": 0.04232073695151589 },
    "harness|hendrycksTest-astronomy|5": { "acc": 0.6842105263157895, "acc_stderr": 0.0378272898086547, "acc_norm": 0.6842105263157895, "acc_norm_stderr": 0.0378272898086547 },
    "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 },
    "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6754716981132075, "acc_stderr": 0.02881561571343211, "acc_norm": 0.6754716981132075, "acc_norm_stderr": 0.02881561571343211 },
    "harness|hendrycksTest-college_biology|5": { "acc": 0.7013888888888888, "acc_stderr": 0.03827052357950756, "acc_norm": 0.7013888888888888, "acc_norm_stderr": 0.03827052357950756 },
    "harness|hendrycksTest-college_chemistry|5": { "acc": 0.46, "acc_stderr": 0.05009082659620332, "acc_norm": 0.46, "acc_norm_stderr": 0.05009082659620332 },
    "harness|hendrycksTest-college_computer_science|5": { "acc": 0.51, "acc_stderr": 0.05024183937956912, "acc_norm": 0.51, "acc_norm_stderr": 0.05024183937956912 },
    "harness|hendrycksTest-college_mathematics|5": { "acc": 0.33, "acc_stderr": 0.047258156262526045, "acc_norm": 0.33, "acc_norm_stderr": 0.047258156262526045 },
    "harness|hendrycksTest-college_medicine|5": { "acc": 0.6358381502890174, "acc_stderr": 0.03669072477416906, "acc_norm": 0.6358381502890174, "acc_norm_stderr": 0.03669072477416906 },
    "harness|hendrycksTest-college_physics|5": { "acc": 0.4215686274509804, "acc_stderr": 0.04913595201274498, "acc_norm": 0.4215686274509804, "acc_norm_stderr": 0.04913595201274498 },
    "harness|hendrycksTest-computer_security|5": { "acc": 0.76, "acc_stderr": 0.042923469599092816, "acc_norm": 0.76, "acc_norm_stderr": 0.042923469599092816 },
    "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5574468085106383, "acc_stderr": 0.032469569197899575, "acc_norm": 0.5574468085106383, "acc_norm_stderr": 0.032469569197899575 },
    "harness|hendrycksTest-econometrics|5": { "acc": 0.4298245614035088, "acc_stderr": 0.04657047260594963, "acc_norm": 0.4298245614035088, "acc_norm_stderr": 0.04657047260594963 },
    "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5655172413793104, "acc_stderr": 0.04130740879555497, "acc_norm": 0.5655172413793104, "acc_norm_stderr": 0.04130740879555497 },
    "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.3994708994708995, "acc_stderr": 0.02522545028406788, "acc_norm": 0.3994708994708995, "acc_norm_stderr": 0.02522545028406788 },
    "harness|hendrycksTest-formal_logic|5": { "acc": 0.40476190476190477, "acc_stderr": 0.04390259265377562, "acc_norm": 0.40476190476190477, "acc_norm_stderr": 0.04390259265377562 },
    "harness|hendrycksTest-global_facts|5": { "acc": 0.32, "acc_stderr": 0.04688261722621504, "acc_norm": 0.32, "acc_norm_stderr": 0.04688261722621504 },
    "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7419354838709677, "acc_stderr": 0.02489246917246283, "acc_norm": 0.7419354838709677, "acc_norm_stderr": 0.02489246917246283 },
    "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5172413793103449, "acc_stderr": 0.035158955511656986, "acc_norm": 0.5172413793103449, "acc_norm_stderr": 0.035158955511656986 },
    "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.67, "acc_stderr": 0.04725815626252609, "acc_norm": 0.67, "acc_norm_stderr": 0.04725815626252609 },
    "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7636363636363637, "acc_stderr": 0.033175059300091805, "acc_norm": 0.7636363636363637, "acc_norm_stderr": 0.033175059300091805 },
    "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7626262626262627, "acc_stderr": 0.0303137105381989, "acc_norm": 0.7626262626262627, "acc_norm_stderr": 0.0303137105381989 },
    "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8860103626943006, "acc_stderr": 0.022935144053919443, "acc_norm": 0.8860103626943006, "acc_norm_stderr": 0.022935144053919443 },
    "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6410256410256411, "acc_stderr": 0.024321738484602354, "acc_norm": 0.6410256410256411, "acc_norm_stderr": 0.024321738484602354 },
    "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.34074074074074073, "acc_stderr": 0.028897748741131143, "acc_norm": 0.34074074074074073, "acc_norm_stderr": 0.028897748741131143 },
    "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6554621848739496, "acc_stderr": 0.030868682604121626, "acc_norm": 0.6554621848739496, "acc_norm_stderr": 0.030868682604121626 },
    "harness|hendrycksTest-high_school_physics|5": { "acc": 0.3509933774834437, "acc_stderr": 0.03896981964257375, "acc_norm": 0.3509933774834437, "acc_norm_stderr": 0.03896981964257375 },
    "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8238532110091743, "acc_stderr": 0.01633288239343138, "acc_norm": 0.8238532110091743, "acc_norm_stderr": 0.01633288239343138 },
    "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4861111111111111, "acc_stderr": 0.03408655867977748, "acc_norm": 0.4861111111111111, "acc_norm_stderr": 0.03408655867977748 },
    "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.803921568627451, "acc_stderr": 0.027865942286639318, "acc_norm": 0.803921568627451, "acc_norm_stderr": 0.027865942286639318 },
    "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.8227848101265823, "acc_stderr": 0.02485636418450322, "acc_norm": 0.8227848101265823, "acc_norm_stderr": 0.02485636418450322 },
    "harness|hendrycksTest-human_aging|5": { "acc": 0.6905829596412556, "acc_stderr": 0.03102441174057222, "acc_norm": 0.6905829596412556, "acc_norm_stderr": 0.03102441174057222 },
    "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7175572519083969, "acc_stderr": 0.03948406125768361, "acc_norm": 0.7175572519083969, "acc_norm_stderr": 0.03948406125768361 },
    "harness|hendrycksTest-international_law|5": { "acc": 0.7768595041322314, "acc_stderr": 0.03800754475228733, "acc_norm": 0.7768595041322314, "acc_norm_stderr": 0.03800754475228733 },
    "harness|hendrycksTest-jurisprudence|5": { "acc": 0.7592592592592593, "acc_stderr": 0.041331194402438376, "acc_norm": 0.7592592592592593, "acc_norm_stderr": 0.041331194402438376 },
    "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.803680981595092, "acc_stderr": 0.031207970394709218, "acc_norm": 0.803680981595092, "acc_norm_stderr": 0.031207970394709218 },
    "harness|hendrycksTest-machine_learning|5": { "acc": 0.5267857142857143, "acc_stderr": 0.047389751192741546, "acc_norm": 0.5267857142857143, "acc_norm_stderr": 0.047389751192741546 },
    "harness|hendrycksTest-management|5": { "acc": 0.7864077669902912, "acc_stderr": 0.040580420156460344, "acc_norm": 0.7864077669902912, "acc_norm_stderr": 0.040580420156460344 },
    "harness|hendrycksTest-marketing|5": { "acc": 0.8589743589743589, "acc_stderr": 0.022801382534597528, "acc_norm": 0.8589743589743589, "acc_norm_stderr": 0.022801382534597528 },
    "harness|hendrycksTest-medical_genetics|5": { "acc": 0.71, "acc_stderr": 0.045604802157206845, "acc_norm": 0.71, "acc_norm_stderr": 0.045604802157206845 },
    "harness|hendrycksTest-miscellaneous|5": { "acc": 0.8058748403575989, "acc_stderr": 0.014143970276657569, "acc_norm": 0.8058748403575989, "acc_norm_stderr": 0.014143970276657569 },
    "harness|hendrycksTest-moral_disputes|5": { "acc": 0.7196531791907514, "acc_stderr": 0.024182427496577612, "acc_norm": 0.7196531791907514, "acc_norm_stderr": 0.024182427496577612 },
    "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3575418994413408, "acc_stderr": 0.016029394474894886, "acc_norm": 0.3575418994413408, "acc_norm_stderr": 0.016029394474894886 },
    "harness|hendrycksTest-nutrition|5": { "acc": 0.7352941176470589, "acc_stderr": 0.02526169121972948, "acc_norm": 0.7352941176470589, "acc_norm_stderr": 0.02526169121972948 },
    "harness|hendrycksTest-philosophy|5": { "acc": 0.707395498392283, "acc_stderr": 0.02583989833487798, "acc_norm": 0.707395498392283, "acc_norm_stderr": 0.02583989833487798 },
    "harness|hendrycksTest-prehistory|5": { "acc": 0.7160493827160493, "acc_stderr": 0.02508947852376513, "acc_norm": 0.7160493827160493, "acc_norm_stderr": 0.02508947852376513 },
    "harness|hendrycksTest-professional_accounting|5": { "acc": 0.44680851063829785, "acc_stderr": 0.029658235097666904, "acc_norm": 0.44680851063829785, "acc_norm_stderr": 0.029658235097666904 },
    "harness|hendrycksTest-professional_law|5": { "acc": 0.4680573663624511, "acc_stderr": 0.012744149704869647, "acc_norm": 0.4680573663624511, "acc_norm_stderr": 0.012744149704869647 },
    "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6470588235294118, "acc_stderr": 0.0290294228156814, "acc_norm": 0.6470588235294118, "acc_norm_stderr": 0.0290294228156814 },
    "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6584967320261438, "acc_stderr": 0.019184639328092487, "acc_norm": 0.6584967320261438, "acc_norm_stderr": 0.019184639328092487 },
    "harness|hendrycksTest-public_relations|5": { "acc": 0.6545454545454545, "acc_stderr": 0.04554619617541054, "acc_norm": 0.6545454545454545, "acc_norm_stderr": 0.04554619617541054 },
    "harness|hendrycksTest-security_studies|5": { "acc": 0.710204081632653, "acc_stderr": 0.029043088683304328, "acc_norm": 0.710204081632653, "acc_norm_stderr": 0.029043088683304328 },
    "harness|hendrycksTest-sociology|5": { "acc": 0.835820895522388, "acc_stderr": 0.026193923544454132, "acc_norm": 0.835820895522388, "acc_norm_stderr": 0.026193923544454132 },
    "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.89, "acc_stderr": 0.03144660377352202, "acc_norm": 0.89, "acc_norm_stderr": 0.03144660377352202 },
    "harness|hendrycksTest-virology|5": { "acc": 0.5180722891566265, "acc_stderr": 0.03889951252827216, "acc_norm": 0.5180722891566265, "acc_norm_stderr": 0.03889951252827216 },
    "harness|hendrycksTest-world_religions|5": { "acc": 0.8187134502923976, "acc_stderr": 0.029547741687640038, "acc_norm": 0.8187134502923976, "acc_norm_stderr": 0.029547741687640038 },
    "harness|truthfulqa:mc|0": { "mc1": 0.4039167686658507, "mc1_stderr": 0.017177276822584284, "mc2": 0.5669974462779248, "mc2_stderr": 0.015574572505669746 },
    "harness|winogrande|5": { "acc": 0.7884767166535123, "acc_stderr": 0.011477747684223194 },
    "harness|gsm8k|5": { "acc": 0.5943896891584534, "acc_stderr": 0.013524848894462113 }
}
```
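Each per-task entry above corresponds to one configuration of this repo and can be loaded the same way as the winogrande example. As a sketch (again assuming the `datasets` library and Hub access), the full set of 63 configurations can be enumerated programmatically:

```python
from datasets import get_dataset_config_names

# List every configuration hosted in this details repo (63 per this card).
configs = get_dataset_config_names("open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1")
print(len(configs))
print(configs[:5])  # e.g. the first few harness_* task configs
```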
## Dataset Details

### Dataset Description

<!-- Provide a longer summary of what this dataset is. -->

- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]

### Dataset Sources [optional]

<!-- Provide the basic links for the dataset. -->

- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]

## Uses

<!-- Address questions around how the dataset is intended to be used. -->

### Direct Use

<!-- This section describes suitable use cases for the dataset. -->

[More Information Needed]

### Out-of-Scope Use

<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->

[More Information Needed]

## Dataset Structure

<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->

[More Information Needed]

## Dataset Creation

### Curation Rationale

<!-- Motivation for the creation of this dataset. -->

[More Information Needed]

### Source Data

<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->

#### Data Collection and Processing

<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->

[More Information Needed]

#### Who are the source data producers?

<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->

[More Information Needed]

### Annotations [optional]

<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->

#### Annotation process

<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->

[More Information Needed]

#### Who are the annotators?

<!-- This section describes the people or systems who created the annotations. -->

[More Information Needed]

#### Personal and Sensitive Information

<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->

[More Information Needed]

## Bias, Risks, and Limitations

<!-- This section is meant to convey both technical and sociotechnical limitations. -->

[More Information Needed]

### Recommendations

<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->

Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.

## Citation [optional]

<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->

**BibTeX:**

[More Information Needed]

**APA:**

[More Information Needed]

## Glossary [optional]

<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->

[More Information Needed]

## More Information [optional]

[More Information Needed]

## Dataset Card Authors [optional]

[More Information Needed]

## Dataset Card Contact

[More Information Needed]
open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1
[ "region:us" ]
2023-12-28T09:32:43+00:00
{"pretty_name": "Evaluation run of xDAN-AI/xDAN-L1-Chat-RL-v1", "dataset_summary": "Dataset automatically created during the evaluation run of model [xDAN-AI/xDAN-L1-Chat-RL-v1](https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-28T09:30:28.410673](https://huggingface.co/datasets/open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1/blob/main/results_2023-12-28T09-30-28.410673.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6344608006071959,\n \"acc_stderr\": 0.03246479153994776,\n \"acc_norm\": 0.6364847695262345,\n \"acc_norm_stderr\": 0.03311425564391542,\n \"mc1\": 0.4039167686658507,\n \"mc1_stderr\": 0.017177276822584284,\n \"mc2\": 0.5669974462779248,\n \"mc2_stderr\": 0.015574572505669746\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6143344709897611,\n \"acc_stderr\": 0.01422425097325718,\n \"acc_norm\": 0.6629692832764505,\n \"acc_norm_stderr\": 0.013813476652902272\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6733718382792272,\n \"acc_stderr\": 0.004680215003395924,\n \"acc_norm\": 0.8580959968133838,\n \"acc_norm_stderr\": 0.003482384956632785\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04232073695151589,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04232073695151589\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6754716981132075,\n \"acc_stderr\": 0.02881561571343211,\n \"acc_norm\": 0.6754716981132075,\n \"acc_norm_stderr\": 0.02881561571343211\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7013888888888888,\n \"acc_stderr\": 0.03827052357950756,\n \"acc_norm\": 0.7013888888888888,\n \"acc_norm_stderr\": 0.03827052357950756\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.46,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.46,\n \"acc_norm_stderr\": 
0.05009082659620332\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.51,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.51,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.047258156262526045,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.047258156262526045\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6358381502890174,\n \"acc_stderr\": 0.03669072477416906,\n \"acc_norm\": 0.6358381502890174,\n \"acc_norm_stderr\": 0.03669072477416906\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.04913595201274498,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.04913595201274498\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5574468085106383,\n \"acc_stderr\": 0.032469569197899575,\n \"acc_norm\": 0.5574468085106383,\n \"acc_norm_stderr\": 0.032469569197899575\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4298245614035088,\n \"acc_stderr\": 0.04657047260594963,\n \"acc_norm\": 0.4298245614035088,\n \"acc_norm_stderr\": 0.04657047260594963\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3994708994708995,\n \"acc_stderr\": 0.02522545028406788,\n \"acc_norm\": 0.3994708994708995,\n \"acc_norm_stderr\": 0.02522545028406788\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.40476190476190477,\n \"acc_stderr\": 0.04390259265377562,\n \"acc_norm\": 0.40476190476190477,\n \"acc_norm_stderr\": 0.04390259265377562\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7419354838709677,\n \"acc_stderr\": 0.02489246917246283,\n \"acc_norm\": 0.7419354838709677,\n \"acc_norm_stderr\": 0.02489246917246283\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.035158955511656986,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.035158955511656986\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252609,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252609\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.033175059300091805,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.033175059300091805\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7626262626262627,\n \"acc_stderr\": 0.0303137105381989,\n \"acc_norm\": 0.7626262626262627,\n \"acc_norm_stderr\": 0.0303137105381989\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919443,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919443\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6410256410256411,\n \"acc_stderr\": 
0.024321738484602354,\n \"acc_norm\": 0.6410256410256411,\n \"acc_norm_stderr\": 0.024321738484602354\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.028897748741131143,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.028897748741131143\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.030868682604121626,\n \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.030868682604121626\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8238532110091743,\n \"acc_stderr\": 0.01633288239343138,\n \"acc_norm\": 0.8238532110091743,\n \"acc_norm_stderr\": 0.01633288239343138\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4861111111111111,\n \"acc_stderr\": 0.03408655867977748,\n \"acc_norm\": 0.4861111111111111,\n \"acc_norm_stderr\": 0.03408655867977748\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639318,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639318\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8227848101265823,\n \"acc_stderr\": 0.02485636418450322,\n \"acc_norm\": 0.8227848101265823,\n \"acc_norm_stderr\": 0.02485636418450322\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.03102441174057222,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.03102441174057222\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7175572519083969,\n \"acc_stderr\": 0.03948406125768361,\n \"acc_norm\": 0.7175572519083969,\n \"acc_norm_stderr\": 0.03948406125768361\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7768595041322314,\n \"acc_stderr\": 0.03800754475228733,\n \"acc_norm\": 0.7768595041322314,\n \"acc_norm_stderr\": 0.03800754475228733\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.041331194402438376,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.041331194402438376\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.803680981595092,\n \"acc_stderr\": 0.031207970394709218,\n \"acc_norm\": 0.803680981595092,\n \"acc_norm_stderr\": 0.031207970394709218\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5267857142857143,\n \"acc_stderr\": 0.047389751192741546,\n \"acc_norm\": 0.5267857142857143,\n \"acc_norm_stderr\": 0.047389751192741546\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8589743589743589,\n \"acc_stderr\": 0.022801382534597528,\n \"acc_norm\": 0.8589743589743589,\n \"acc_norm_stderr\": 0.022801382534597528\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8058748403575989,\n \"acc_stderr\": 0.014143970276657569,\n \"acc_norm\": 0.8058748403575989,\n 
\"acc_norm_stderr\": 0.014143970276657569\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7196531791907514,\n \"acc_stderr\": 0.024182427496577612,\n \"acc_norm\": 0.7196531791907514,\n \"acc_norm_stderr\": 0.024182427496577612\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3575418994413408,\n \"acc_stderr\": 0.016029394474894886,\n \"acc_norm\": 0.3575418994413408,\n \"acc_norm_stderr\": 0.016029394474894886\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7352941176470589,\n \"acc_stderr\": 0.02526169121972948,\n \"acc_norm\": 0.7352941176470589,\n \"acc_norm_stderr\": 0.02526169121972948\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7160493827160493,\n \"acc_stderr\": 0.02508947852376513,\n \"acc_norm\": 0.7160493827160493,\n \"acc_norm_stderr\": 0.02508947852376513\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.44680851063829785,\n \"acc_stderr\": 0.029658235097666904,\n \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.029658235097666904\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4680573663624511,\n \"acc_stderr\": 0.012744149704869647,\n \"acc_norm\": 0.4680573663624511,\n \"acc_norm_stderr\": 0.012744149704869647\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6470588235294118,\n \"acc_stderr\": 0.0290294228156814,\n \"acc_norm\": 0.6470588235294118,\n \"acc_norm_stderr\": 0.0290294228156814\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6584967320261438,\n \"acc_stderr\": 0.019184639328092487,\n \"acc_norm\": 0.6584967320261438,\n \"acc_norm_stderr\": 0.019184639328092487\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454132,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454132\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.89,\n \"acc_stderr\": 0.03144660377352202,\n \"acc_norm\": 0.89,\n \"acc_norm_stderr\": 0.03144660377352202\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5180722891566265,\n \"acc_stderr\": 0.03889951252827216,\n \"acc_norm\": 0.5180722891566265,\n \"acc_norm_stderr\": 0.03889951252827216\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.4039167686658507,\n \"mc1_stderr\": 0.017177276822584284,\n \"mc2\": 0.5669974462779248,\n \"mc2_stderr\": 0.015574572505669746\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7884767166535123,\n \"acc_stderr\": 0.011477747684223194\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5943896891584534,\n \"acc_stderr\": 0.013524848894462113\n }\n}\n```", "repo_url": "https://huggingface.co/xDAN-AI/xDAN-L1-Chat-RL-v1", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|arc:challenge|25_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|gsm8k|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hellaswag|10_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T09-30-28.410673.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T09-30-28.410673.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-28T09-30-28.410673.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-28T09-30-28.410673.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T09-30-28.410673.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T09-30-28.410673.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["**/details_harness|winogrande|5_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-28T09-30-28.410673.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_28T09_30_28.410673", "path": ["results_2023-12-28T09-30-28.410673.parquet"]}, {"split": "latest", "path": 
["results_2023-12-28T09-30-28.410673.parquet"]}]}]}
2023-12-28T09:33:05+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1-Chat-RL-v1 Dataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1-Chat-RL-v1 on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-28T09:30:28.410673 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
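The loading snippet referenced above follows the same convention as the other Open LLM Leaderboard detail repos in this dump; a minimal sketch, assuming the standard `datasets` API and the `details_<org>__<model>` repo naming:

```python
from datasets import load_dataset

# Minimal sketch: any of the 63 task configurations can be loaded by name;
# "harness_winogrande_5" is one example, and split="train" points at the
# latest results per the card above.
data = load_dataset("open-llm-leaderboard/details_xDAN-AI__xDAN-L1-Chat-RL-v1",
                    "harness_winogrande_5",
                    split="train")
```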
[ "# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1-Chat-RL-v1\n\n\n\nDataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1-Chat-RL-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-28T09:30:28.410673(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1-Chat-RL-v1\n\n\n\nDataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1-Chat-RL-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-28T09:30:28.410673(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 193, 68, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of xDAN-AI/xDAN-L1-Chat-RL-v1\n\n\n\nDataset automatically created during the evaluation run of model xDAN-AI/xDAN-L1-Chat-RL-v1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-28T09:30:28.410673(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]" ]
02401f2c9438a740884240b9c9ceffaa3e0856f7
amphora/finqa_suite
[ "license:mit", "region:us" ]
2023-12-28T09:35:02+00:00
{"license": "mit", "configs": [{"config_name": "mmlu_professional_accounting", "data_files": [{"split": "test", "path": "data/mmlu_pa_test.csv"}, {"split": "dev", "path": "data/mmlu_professional_accounting_dev.csv"}]}, {"config_name": "mmlu_high_school_microeconomics", "data_files": [{"split": "test", "path": "data/mmlu_mi_test.csv"}, {"split": "dev", "path": "data/mmlu_high_school_microeconomics_dev.csv"}]}, {"config_name": "mmlu_high_school_macroeconomics", "data_files": [{"split": "test", "path": "data/mmlu_ma_test.csv"}, {"split": "dev", "path": "data/mmlu_high_school_macroeconomics_dev.csv"}]}, {"config_name": "mmlu_econometrics", "data_files": [{"split": "test", "path": "data/mmlu_em_test.csv"}, {"split": "dev", "path": "data/mmlu_econometrics_dev.csv"}]}, {"config_name": "finqa", "data_files": [{"split": "test", "path": "data/finqa_test.csv"}, {"split": "dev", "path": "data/finqa_dev.csv"}]}, {"config_name": "convfinqa", "data_files": [{"split": "test", "path": "data/convfinqa_test.csv"}, {"split": "dev", "path": "data/convfinqa_valid.csv"}]}]}
2024-01-15T08:07:22+00:00
[]
[]
TAGS #license-mit #region-us
[]
[ "TAGS\n#license-mit #region-us \n" ]
[ 11 ]
[ "passage: TAGS\n#license-mit #region-us \n" ]
59a19c99b1fe099cc5621968e760605a1d3ac8e0
# Dataset Card for "CVE_explain_dataset" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
Nganlt/CVE_explain_dataset
[ "region:us" ]
2023-12-28T09:41:42+00:00
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 47186572, "num_examples": 44988}], "download_size": 11209470, "dataset_size": 47186572}}
2023-12-28T09:43:40+00:00
[]
[]
TAGS #region-us
# Dataset Card for "CVE_explain_dataset" More Information needed
[ "# Dataset Card for \"CVE_explain_dataset\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"CVE_explain_dataset\"\n\nMore Information needed" ]
[ 6, 18 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"CVE_explain_dataset\"\n\nMore Information needed" ]
405cdae456f56fe527eefc8be2630ac6db48093d
# Text embedding Datasets The text embedding datasets consist of several (query, passage) paired datasets aimed at text-embedding model finetuning. These datasets are ideal for developing and testing algorithms in the fields of natural language processing, information retrieval, and similar applications. ## Dataset Details Each dataset in this collection is structured to facilitate the training and evaluation of text-embedding models. The datasets are diverse, covering multiple domains and formats. They are particularly useful for tasks like semantic search, question-answering systems, and document retrieval. ### [MOOC MCQ Queries] The "MOOC MCQ Queries" dataset is derived from [FUN MOOC](https://www.fun-mooc.fr/fr/), an online platform offering a wide range of French courses across various domains. This dataset is uniquely valuable for its high-quality content, manually curated to assist students in understanding course materials better. #### Content Overview: - **Language**: French - **Domains**: - History: 57 examples - Religion: 125 examples - [Other domains to be added] - **Dataset Description**: Each record in the dataset includes the following fields: ```json { "query_id": "Unique identifier for each query", "query": "Text of the multiple-choice question (MCQ)", "answers": ["List of correct answer choices"], "distractions": ["List of incorrect choices"], "relevant_docs": ["List of relevant document IDs aiding the answer"] } ``` - **statistics**: | Category | Num. of Queries | Query Avg. Words | Number of Docs | Short Docs (<375 words) | Long Docs (≥375 words) | Doc Avg. Words | |----------------|-----------------|------------------|----------------|-------------------------|------------------------|----------------| | history | 57 | 11.31 | 224 | 147 | 77 | 351.79 | | religion | 125 | 15.08 | 126 | 78 | 48 | 375.63 | | recherche | 52 | 12.71 | 69 | 20 | 49 | 535.00 | | python | 85 | 21.24 | 194 | 27 | 167 | 552.60 | ### [Wikitext generated Queries] To complete ### [Documents] This dataset is an extensive collection of document chunks, or entire documents for short texts, designed to complement the MOOC MCQ Queries and other datasets in the collection. - **chunking strategies**: - MOOC MCQ Queries: documents are chunked according to their natural divisions, like sections or subsections, ensuring that each chunk maintains contextual integrity. - **content format**: ```json { "doc_id": "Unique identifier for each document", "doc": "Text content of the document" } ```
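A minimal loading sketch, assuming the standard `datasets` API and the config/split names declared in this repo's metadata (`MOOC_MCQ_Queries`, `Documents`, split `history`):

```python
from datasets import load_dataset

# Load the history-domain MCQ queries and their source documents.
queries = load_dataset("ProfessorBob/text-embedding-dataset",
                       "MOOC_MCQ_Queries", split="history")
docs = load_dataset("ProfessorBob/text-embedding-dataset",
                    "Documents", split="history")

# Pair each query with its relevant passages via doc_id, yielding the
# (query, passage) pairs the collection is designed around.
doc_by_id = {d["doc_id"]: d["doc"] for d in docs}
pairs = [(q["query"], doc_by_id[i])
         for q in queries
         for i in q["relevant_docs"] if i in doc_by_id]
```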
ProfessorBob/text-embedding-dataset
[ "region:us" ]
2023-12-28T10:23:48+00:00
{"dataset_info": [{"config_name": "Documents", "features": [{"name": "doc_id", "dtype": "string"}, {"name": "doc", "dtype": "string"}], "splits": [{"name": "history", "num_bytes": 508218, "num_examples": 224}, {"name": "religion", "num_bytes": 302837, "num_examples": 126}, {"name": "recherche", "num_bytes": 235256, "num_examples": 69}, {"name": "python", "num_bytes": 660763, "num_examples": 194}], "download_size": 952235, "dataset_size": 1707074}, {"config_name": "MOOC_MCQ_Queries", "features": [{"name": "query_id", "dtype": "string"}, {"name": "query", "dtype": "string"}, {"name": "answers", "sequence": "string"}, {"name": "distractions", "sequence": "string"}, {"name": "relevant_docs", "sequence": "string"}], "splits": [{"name": "history", "num_bytes": 13156, "num_examples": 58}, {"name": "religion", "num_bytes": 52563, "num_examples": 125}, {"name": "recherche", "num_bytes": 18791, "num_examples": 52}, {"name": "python", "num_bytes": 29759, "num_examples": 85}], "download_size": 80494, "dataset_size": 114269}], "configs": [{"config_name": "Documents", "data_files": [{"split": "history", "path": "Documents/history-*"}, {"split": "religion", "path": "Documents/religion-*"}, {"split": "recherche", "path": "Documents/recherche-*"}, {"split": "python", "path": "Documents/python-*"}]}, {"config_name": "MOOC_MCQ_Queries", "data_files": [{"split": "history", "path": "MOOC_MCQ_Queries/history-*"}, {"split": "religion", "path": "MOOC_MCQ_Queries/religion-*"}, {"split": "recherche", "path": "MOOC_MCQ_Queries/recherche-*"}, {"split": "python", "path": "MOOC_MCQ_Queries/python-*"}]}]}
2024-01-02T14:42:35+00:00
[]
[]
TAGS #region-us
Text embedding Datasets ======================= The text embedding datasets consist of several (query, passage) paired datasets aimed at text-embedding model finetuning. These datasets are ideal for developing and testing algorithms in the fields of natural language processing, information retrieval, and similar applications. Dataset Details --------------- Each dataset in this collection is structured to facilitate the training and evaluation of text-embedding models. The datasets are diverse, covering multiple domains and formats. They are particularly useful for tasks like semantic search, question-answering systems, and document retrieval. ### [MOOC MCQ Queries] The "MOOC MCQ Queries" dataset is derived from FUN MOOC, an online platform offering a wide range of French courses across various domains. This dataset is uniquely valuable for its high-quality content, manually curated to assist students in understanding course materials better. #### Content Overview: * Language: French * Domains: + History: 57 examples + Religion: 125 examples + [Other domains to be added] * Dataset Description: Each record in the dataset includes the following fields: * statistics: ### [Wikitext generated Queries] To complete ### [Documents] This dataset is an extensive collection of document chunks, or entire documents for short texts, designed to complement the MOOC MCQ Queries and other datasets in the collection. * chunking strategies: + MOOC MCQ Queries: documents are chunked according to their natural divisions, like sections or subsections, ensuring that each chunk maintains contextual integrity. * content format:
[ "### [MOOC MCQ Queries]\n\n\nThe \"MOOC MCQ Queries\" dataset is derived from FUN MOOC, an online platform offering a wide range of French courses across various domains. This dataset is uniquely valuable for its high-quality content, manually curated to assist students in understanding course materials better.", "#### Content Overview:\n\n\n* Language: French\n* Domains:\n\n\n\t+ History: 57 examples\n\t+ Religion: 125 examples\n\t+ [Other domains to be added]\n* Dataset Description:\nEach record in the dataset includes the following fields:\n* statistics:", "### [Wikitext generated Queries]\n\n\nTo complete", "### [Documents]\n\n\nThis dataset is an extensive collection of document chunkings or entire document for short texts, designed to complement the MOOC MCQ Queries and other datasets in the collection.\n\n\n* chunking strategies:\n\t+ MOOC MCQ Queries: documents are chunked according to their natural divisions, like sections or subsections, ensuring that each chunk maintains contextual integrity.\n* content format:" ]
[ "TAGS\n#region-us \n", "### [MOOC MCQ Queries]\n\n\nThe \"MOOC MCQ Queries\" dataset is derived from FUN MOOC, an online platform offering a wide range of French courses across various domains. This dataset is uniquely valuable for its high-quality content, manually curated to assist students in understanding course materials better.", "#### Content Overview:\n\n\n* Language: French\n* Domains:\n\n\n\t+ History: 57 examples\n\t+ Religion: 125 examples\n\t+ [Other domains to be added]\n* Dataset Description:\nEach record in the dataset includes the following fields:\n* statistics:", "### [Wikitext generated Queries]\n\n\nTo complete", "### [Documents]\n\n\nThis dataset is an extensive collection of document chunkings or entire document for short texts, designed to complement the MOOC MCQ Queries and other datasets in the collection.\n\n\n* chunking strategies:\n\t+ MOOC MCQ Queries: documents are chunked according to their natural divisions, like sections or subsections, ensuring that each chunk maintains contextual integrity.\n* content format:" ]
[ 6, 71, 57, 12, 99 ]
[ "passage: TAGS\n#region-us \n### [MOOC MCQ Queries]\n\n\nThe \"MOOC MCQ Queries\" dataset is derived from FUN MOOC, an online platform offering a wide range of French courses across various domains. This dataset is uniquely valuable for its high-quality content, manually curated to assist students in understanding course materials better.#### Content Overview:\n\n\n* Language: French\n* Domains:\n\n\n\t+ History: 57 examples\n\t+ Religion: 125 examples\n\t+ [Other domains to be added]\n* Dataset Description:\nEach record in the dataset includes the following fields:\n* statistics:### [Wikitext generated Queries]\n\n\nTo complete### [Documents]\n\n\nThis dataset is an extensive collection of document chunkings or entire document for short texts, designed to complement the MOOC MCQ Queries and other datasets in the collection.\n\n\n* chunking strategies:\n\t+ MOOC MCQ Queries: documents are chunked according to their natural divisions, like sections or subsections, ensuring that each chunk maintains contextual integrity.\n* content format:" ]
1614b57a677c4f21d87a95ca8b38f3bde8cb8b60
# Marvel Characters Dataset This dataset includes a compilation of various Marvel characters, their first appearances in films or TV shows, and the actors who portrayed them. ## Contents - [Overview](#overview) - [Dataset Details](#dataset-details) - [Usage](#usage) - [License](#license) ## Overview This dataset contains information about a wide range of Marvel characters, their debut appearances in films or TV shows, and the actors who played those roles. The data could be used for analysis, reference, or any related projects involving Marvel characters. ## Dataset Details The dataset is organized as follows: - Character Name - Description - First Appearance in Film/TV Show - Actor Name The data is compiled from various sources and covers a diverse array of characters from the Marvel Universe. ## Usage Feel free to use this dataset for analysis, research, or any creative projects related to Marvel characters. If you use this dataset in any publication or project, please provide credit by referencing this repository. ## License This dataset is provided under the [MIT License](LICENSE). See the License file for more details.
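Given the four fields listed under Dataset Details, a record presumably looks like the following hypothetical row (the values are illustrative and not drawn from the dataset itself):

```python
# Hypothetical record, assuming one row per character with the four listed fields.
record = {
    "character_name": "Iron Man",
    "description": "Tony Stark, armored Avenger and industrialist.",
    "first_appearance": "Iron Man (2008)",
    "actor_name": "Robert Downey Jr.",
}
```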
Manvith/Marvel_dataset
[ "region:us" ]
2023-12-28T10:24:22+00:00
{}
2023-12-30T11:30:31+00:00
[]
[]
TAGS #region-us
# Marvel Characters Dataset This dataset includes a compilation of various Marvel characters, their first appearances in films or TV shows, and the actors who portrayed them. ## Contents - Overview - Dataset Details - Usage - License ## Overview This dataset contains information about a wide range of Marvel characters, their debut appearances in films or TV shows, and the actors who played those roles. The data could be used for analysis, reference, or any related projects involving Marvel characters. ## Dataset Details The dataset is organized as follows: - Character Name - Description - First Appearance in Film/TV Show - Actor Name The data is compiled from various sources and covers a diverse array of characters from the Marvel Universe. ## Usage Feel free to use this dataset for analysis, research, or any creative projects related to Marvel characters. If you use this dataset in any publication or project, please provide credit by referencing this repository. ## License This dataset is provided under the MIT License. See the License file for more details.
[ "# Marvel Characters Dataset\n\nThis dataset includes a compilation of various Marvel characters, their first appearances in films or TV shows, and the actors who portrayed them.", "## Contents\n\n- Overview\n- Dataset Details\n- Usage\n- License", "## Overview\n\nThis dataset contains information about a wide range of Marvel characters, their debut appearances in films or TV shows, and the actors who played those roles. The data could be used for analysis, reference, or any related projects involving Marvel characters.", "## Dataset Details\n\nThe dataset is organized as follows:\n- Character Name\n- Description\n- First Appearance in Film/TV Show\n- Actor Name\n\nThe data is compiled from various sources and covers a diverse array of characters from the Marvel Universe.", "## Usage\n\nFeel free to use this dataset for analysis, research, or any creative projects related to Marvel characters. If you use this dataset in any publication or project, please provide credit by referencing this repository.", "## License\n\nThis dataset is provided under the MIT License. See the License file for more details." ]
[ "TAGS\n#region-us \n", "# Marvel Characters Dataset\n\nThis dataset includes a compilation of various Marvel characters, their first appearances in films or TV shows, and the actors who portrayed them.", "## Contents\n\n- Overview\n- Dataset Details\n- Usage\n- License", "## Overview\n\nThis dataset contains information about a wide range of Marvel characters, their debut appearances in films or TV shows, and the actors who played those roles. The data could be used for analysis, reference, or any related projects involving Marvel characters.", "## Dataset Details\n\nThe dataset is organized as follows:\n- Character Name\n- Description\n- First Appearance in Film/TV Show\n- Actor Name\n\nThe data is compiled from various sources and covers a diverse array of characters from the Marvel Universe.", "## Usage\n\nFeel free to use this dataset for analysis, research, or any creative projects related to Marvel characters. If you use this dataset in any publication or project, please provide credit by referencing this repository.", "## License\n\nThis dataset is provided under the MIT License. See the License file for more details." ]
[ 6, 39, 15, 57, 57, 47, 20 ]
[ "passage: TAGS\n#region-us \n# Marvel Characters Dataset\n\nThis dataset includes a compilation of various Marvel characters, their first appearances in films or TV shows, and the actors who portrayed them.## Contents\n\n- Overview\n- Dataset Details\n- Usage\n- License## Overview\n\nThis dataset contains information about a wide range of Marvel characters, their debut appearances in films or TV shows, and the actors who played those roles. The data could be used for analysis, reference, or any related projects involving Marvel characters.## Dataset Details\n\nThe dataset is organized as follows:\n- Character Name\n- Description\n- First Appearance in Film/TV Show\n- Actor Name\n\nThe data is compiled from various sources and covers a diverse array of characters from the Marvel Universe.## Usage\n\nFeel free to use this dataset for analysis, research, or any creative projects related to Marvel characters. If you use this dataset in any publication or project, please provide credit by referencing this repository.## License\n\nThis dataset is provided under the MIT License. See the License file for more details." ]
c6d497b96a2bb4b4dcd38587e3b2cc3272f8e82b
# Dataset Card for Evaluation run of Azure99/blossom-v4-mistral-7b <!-- Provide a quick summary of the dataset. --> Dataset automatically created during the evaluation run of model [Azure99/blossom-v4-mistral-7b](https://huggingface.co/Azure99/blossom-v4-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard). The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)). To load the details from a run, you can for instance do the following: ```python from datasets import load_dataset data = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v4-mistral-7b", "harness_winogrande_5", split="train") ``` ## Latest results These are the [latest results from run 2023-12-28T11:10:20.298869](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v4-mistral-7b/blob/main/results_2023-12-28T11-10-20.298869.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval): ```python { "all": { "acc": 0.6235420002967518, "acc_stderr": 0.03272388603364805, "acc_norm": 0.6281854377869052, "acc_norm_stderr": 0.03338061598239654, "mc1": 0.36964504283965727, "mc1_stderr": 0.016898180706973888, "mc2": 0.5384391963865467, "mc2_stderr": 0.015414673673859326 }, "harness|arc:challenge|25": { "acc": 0.5793515358361775, "acc_stderr": 0.014426211252508397, "acc_norm": 0.6203071672354948, "acc_norm_stderr": 0.014182119866974872 }, "harness|hellaswag|10": { "acc": 0.6390161322445728, "acc_stderr": 0.004793042992396035, "acc_norm": 0.8290181238797052, "acc_norm_stderr": 0.0037572368063973345 }, "harness|hendrycksTest-abstract_algebra|5": { "acc": 0.28, "acc_stderr": 0.04512608598542129, "acc_norm": 0.28, "acc_norm_stderr": 0.04512608598542129 }, "harness|hendrycksTest-anatomy|5": { "acc": 0.5851851851851851, "acc_stderr": 0.04256193767901409, "acc_norm": 0.5851851851851851, "acc_norm_stderr": 0.04256193767901409 }, "harness|hendrycksTest-astronomy|5": { "acc": 0.6381578947368421, "acc_stderr": 0.03910525752849724, "acc_norm": 0.6381578947368421, "acc_norm_stderr": 0.03910525752849724 }, "harness|hendrycksTest-business_ethics|5": { "acc": 0.62, "acc_stderr": 0.048783173121456316, "acc_norm": 0.62, "acc_norm_stderr": 0.048783173121456316 }, "harness|hendrycksTest-clinical_knowledge|5": { "acc": 0.6943396226415094, "acc_stderr": 0.028353298073322666, "acc_norm": 0.6943396226415094, "acc_norm_stderr": 0.028353298073322666 }, "harness|hendrycksTest-college_biology|5": { "acc": 0.7152777777777778, "acc_stderr": 0.03773809990686934, "acc_norm": 0.7152777777777778, "acc_norm_stderr": 0.03773809990686934 }, "harness|hendrycksTest-college_chemistry|5": { "acc": 0.45, "acc_stderr": 0.05, "acc_norm": 0.45, "acc_norm_stderr": 0.05 }, "harness|hendrycksTest-college_computer_science|5": { "acc": 0.5, "acc_stderr": 0.050251890762960605, "acc_norm": 0.5, "acc_norm_stderr": 0.050251890762960605 }, "harness|hendrycksTest-college_mathematics|5": { "acc":
0.36, "acc_stderr": 0.04824181513244218, "acc_norm": 0.36, "acc_norm_stderr": 0.04824181513244218 }, "harness|hendrycksTest-college_medicine|5": { "acc": 0.5838150289017341, "acc_stderr": 0.03758517775404947, "acc_norm": 0.5838150289017341, "acc_norm_stderr": 0.03758517775404947 }, "harness|hendrycksTest-college_physics|5": { "acc": 0.4019607843137255, "acc_stderr": 0.048786087144669955, "acc_norm": 0.4019607843137255, "acc_norm_stderr": 0.048786087144669955 }, "harness|hendrycksTest-computer_security|5": { "acc": 0.74, "acc_stderr": 0.0440844002276808, "acc_norm": 0.74, "acc_norm_stderr": 0.0440844002276808 }, "harness|hendrycksTest-conceptual_physics|5": { "acc": 0.5787234042553191, "acc_stderr": 0.03227834510146268, "acc_norm": 0.5787234042553191, "acc_norm_stderr": 0.03227834510146268 }, "harness|hendrycksTest-econometrics|5": { "acc": 0.49122807017543857, "acc_stderr": 0.04702880432049615, "acc_norm": 0.49122807017543857, "acc_norm_stderr": 0.04702880432049615 }, "harness|hendrycksTest-electrical_engineering|5": { "acc": 0.5724137931034483, "acc_stderr": 0.04122737111370333, "acc_norm": 0.5724137931034483, "acc_norm_stderr": 0.04122737111370333 }, "harness|hendrycksTest-elementary_mathematics|5": { "acc": 0.41005291005291006, "acc_stderr": 0.02533120243894443, "acc_norm": 0.41005291005291006, "acc_norm_stderr": 0.02533120243894443 }, "harness|hendrycksTest-formal_logic|5": { "acc": 0.3968253968253968, "acc_stderr": 0.04375888492727061, "acc_norm": 0.3968253968253968, "acc_norm_stderr": 0.04375888492727061 }, "harness|hendrycksTest-global_facts|5": { "acc": 0.4, "acc_stderr": 0.049236596391733084, "acc_norm": 0.4, "acc_norm_stderr": 0.049236596391733084 }, "harness|hendrycksTest-high_school_biology|5": { "acc": 0.7322580645161291, "acc_stderr": 0.025189006660212378, "acc_norm": 0.7322580645161291, "acc_norm_stderr": 0.025189006660212378 }, "harness|hendrycksTest-high_school_chemistry|5": { "acc": 0.5270935960591133, "acc_stderr": 0.03512819077876106, "acc_norm": 0.5270935960591133, "acc_norm_stderr": 0.03512819077876106 }, "harness|hendrycksTest-high_school_computer_science|5": { "acc": 0.68, "acc_stderr": 0.04688261722621505, "acc_norm": 0.68, "acc_norm_stderr": 0.04688261722621505 }, "harness|hendrycksTest-high_school_european_history|5": { "acc": 0.7696969696969697, "acc_stderr": 0.0328766675860349, "acc_norm": 0.7696969696969697, "acc_norm_stderr": 0.0328766675860349 }, "harness|hendrycksTest-high_school_geography|5": { "acc": 0.7828282828282829, "acc_stderr": 0.029376616484945627, "acc_norm": 0.7828282828282829, "acc_norm_stderr": 0.029376616484945627 }, "harness|hendrycksTest-high_school_government_and_politics|5": { "acc": 0.8549222797927462, "acc_stderr": 0.025416343096306433, "acc_norm": 0.8549222797927462, "acc_norm_stderr": 0.025416343096306433 }, "harness|hendrycksTest-high_school_macroeconomics|5": { "acc": 0.6153846153846154, "acc_stderr": 0.024666744915187208, "acc_norm": 0.6153846153846154, "acc_norm_stderr": 0.024666744915187208 }, "harness|hendrycksTest-high_school_mathematics|5": { "acc": 0.3148148148148148, "acc_stderr": 0.028317533496066468, "acc_norm": 0.3148148148148148, "acc_norm_stderr": 0.028317533496066468 }, "harness|hendrycksTest-high_school_microeconomics|5": { "acc": 0.6554621848739496, "acc_stderr": 0.030868682604121622, "acc_norm": 0.6554621848739496, "acc_norm_stderr": 0.030868682604121622 }, "harness|hendrycksTest-high_school_physics|5": { "acc": 0.32450331125827814, "acc_stderr": 0.03822746937658753, "acc_norm": 0.32450331125827814, 
"acc_norm_stderr": 0.03822746937658753 }, "harness|hendrycksTest-high_school_psychology|5": { "acc": 0.8018348623853211, "acc_stderr": 0.017090573804217902, "acc_norm": 0.8018348623853211, "acc_norm_stderr": 0.017090573804217902 }, "harness|hendrycksTest-high_school_statistics|5": { "acc": 0.4675925925925926, "acc_stderr": 0.03402801581358966, "acc_norm": 0.4675925925925926, "acc_norm_stderr": 0.03402801581358966 }, "harness|hendrycksTest-high_school_us_history|5": { "acc": 0.8235294117647058, "acc_stderr": 0.026756401538078966, "acc_norm": 0.8235294117647058, "acc_norm_stderr": 0.026756401538078966 }, "harness|hendrycksTest-high_school_world_history|5": { "acc": 0.7721518987341772, "acc_stderr": 0.027303484599069422, "acc_norm": 0.7721518987341772, "acc_norm_stderr": 0.027303484599069422 }, "harness|hendrycksTest-human_aging|5": { "acc": 0.6502242152466368, "acc_stderr": 0.03200736719484503, "acc_norm": 0.6502242152466368, "acc_norm_stderr": 0.03200736719484503 }, "harness|hendrycksTest-human_sexuality|5": { "acc": 0.7633587786259542, "acc_stderr": 0.03727673575596913, "acc_norm": 0.7633587786259542, "acc_norm_stderr": 0.03727673575596913 }, "harness|hendrycksTest-international_law|5": { "acc": 0.7603305785123967, "acc_stderr": 0.03896878985070417, "acc_norm": 0.7603305785123967, "acc_norm_stderr": 0.03896878985070417 }, "harness|hendrycksTest-jurisprudence|5": { "acc": 0.75, "acc_stderr": 0.04186091791394607, "acc_norm": 0.75, "acc_norm_stderr": 0.04186091791394607 }, "harness|hendrycksTest-logical_fallacies|5": { "acc": 0.7484662576687117, "acc_stderr": 0.03408997886857529, "acc_norm": 0.7484662576687117, "acc_norm_stderr": 0.03408997886857529 }, "harness|hendrycksTest-machine_learning|5": { "acc": 0.49107142857142855, "acc_stderr": 0.04745033255489123, "acc_norm": 0.49107142857142855, "acc_norm_stderr": 0.04745033255489123 }, "harness|hendrycksTest-management|5": { "acc": 0.7961165048543689, "acc_stderr": 0.039891398595317706, "acc_norm": 0.7961165048543689, "acc_norm_stderr": 0.039891398595317706 }, "harness|hendrycksTest-marketing|5": { "acc": 0.8760683760683761, "acc_stderr": 0.02158649400128136, "acc_norm": 0.8760683760683761, "acc_norm_stderr": 0.02158649400128136 }, "harness|hendrycksTest-medical_genetics|5": { "acc": 0.66, "acc_stderr": 0.04760952285695237, "acc_norm": 0.66, "acc_norm_stderr": 0.04760952285695237 }, "harness|hendrycksTest-miscellaneous|5": { "acc": 0.80970625798212, "acc_stderr": 0.014036945850381398, "acc_norm": 0.80970625798212, "acc_norm_stderr": 0.014036945850381398 }, "harness|hendrycksTest-moral_disputes|5": { "acc": 0.6878612716763006, "acc_stderr": 0.024946792225272314, "acc_norm": 0.6878612716763006, "acc_norm_stderr": 0.024946792225272314 }, "harness|hendrycksTest-moral_scenarios|5": { "acc": 0.3329608938547486, "acc_stderr": 0.015761716178397566, "acc_norm": 0.3329608938547486, "acc_norm_stderr": 0.015761716178397566 }, "harness|hendrycksTest-nutrition|5": { "acc": 0.7189542483660131, "acc_stderr": 0.025738854797818737, "acc_norm": 0.7189542483660131, "acc_norm_stderr": 0.025738854797818737 }, "harness|hendrycksTest-philosophy|5": { "acc": 0.707395498392283, "acc_stderr": 0.02583989833487798, "acc_norm": 0.707395498392283, "acc_norm_stderr": 0.02583989833487798 }, "harness|hendrycksTest-prehistory|5": { "acc": 0.7129629629629629, "acc_stderr": 0.02517104191530968, "acc_norm": 0.7129629629629629, "acc_norm_stderr": 0.02517104191530968 }, "harness|hendrycksTest-professional_accounting|5": { "acc": 0.4929078014184397, "acc_stderr": 0.02982449855912901, 
"acc_norm": 0.4929078014184397, "acc_norm_stderr": 0.02982449855912901 }, "harness|hendrycksTest-professional_law|5": { "acc": 0.4511082138200782, "acc_stderr": 0.012709037347346233, "acc_norm": 0.4511082138200782, "acc_norm_stderr": 0.012709037347346233 }, "harness|hendrycksTest-professional_medicine|5": { "acc": 0.6323529411764706, "acc_stderr": 0.029289413409403192, "acc_norm": 0.6323529411764706, "acc_norm_stderr": 0.029289413409403192 }, "harness|hendrycksTest-professional_psychology|5": { "acc": 0.6584967320261438, "acc_stderr": 0.01918463932809249, "acc_norm": 0.6584967320261438, "acc_norm_stderr": 0.01918463932809249 }, "harness|hendrycksTest-public_relations|5": { "acc": 0.6454545454545455, "acc_stderr": 0.04582004841505417, "acc_norm": 0.6454545454545455, "acc_norm_stderr": 0.04582004841505417 }, "harness|hendrycksTest-security_studies|5": { "acc": 0.6979591836734694, "acc_stderr": 0.029393609319879804, "acc_norm": 0.6979591836734694, "acc_norm_stderr": 0.029393609319879804 }, "harness|hendrycksTest-sociology|5": { "acc": 0.8407960199004975, "acc_stderr": 0.02587064676616913, "acc_norm": 0.8407960199004975, "acc_norm_stderr": 0.02587064676616913 }, "harness|hendrycksTest-us_foreign_policy|5": { "acc": 0.84, "acc_stderr": 0.03684529491774709, "acc_norm": 0.84, "acc_norm_stderr": 0.03684529491774709 }, "harness|hendrycksTest-virology|5": { "acc": 0.536144578313253, "acc_stderr": 0.038823108508905954, "acc_norm": 0.536144578313253, "acc_norm_stderr": 0.038823108508905954 }, "harness|hendrycksTest-world_religions|5": { "acc": 0.8128654970760234, "acc_stderr": 0.029913127232368036, "acc_norm": 0.8128654970760234, "acc_norm_stderr": 0.029913127232368036 }, "harness|truthfulqa:mc|0": { "mc1": 0.36964504283965727, "mc1_stderr": 0.016898180706973888, "mc2": 0.5384391963865467, "mc2_stderr": 0.015414673673859326 }, "harness|winogrande|5": { "acc": 0.7726913970007893, "acc_stderr": 0.011778612167091087 }, "harness|gsm8k|5": { "acc": 0.4313874147081122, "acc_stderr": 0.013642195352511575 } } ``` ## Dataset Details ### Dataset Description <!-- Provide a longer summary of what this dataset is. --> - **Curated by:** [More Information Needed] - **Funded by [optional]:** [More Information Needed] - **Shared by [optional]:** [More Information Needed] - **Language(s) (NLP):** [More Information Needed] - **License:** [More Information Needed] ### Dataset Sources [optional] <!-- Provide the basic links for the dataset. --> - **Repository:** [More Information Needed] - **Paper [optional]:** [More Information Needed] - **Demo [optional]:** [More Information Needed] ## Uses <!-- Address questions around how the dataset is intended to be used. --> ### Direct Use <!-- This section describes suitable use cases for the dataset. --> [More Information Needed] ### Out-of-Scope Use <!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. --> [More Information Needed] ## Dataset Structure <!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. --> [More Information Needed] ## Dataset Creation ### Curation Rationale <!-- Motivation for the creation of this dataset. --> [More Information Needed] ### Source Data <!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). 
--> #### Data Collection and Processing <!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. --> [More Information Needed] #### Who are the source data producers? <!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. --> [More Information Needed] ### Annotations [optional] <!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. --> #### Annotation process <!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. --> [More Information Needed] #### Who are the annotators? <!-- This section describes the people or systems who created the annotations. --> [More Information Needed] #### Personal and Sensitive Information <!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. --> [More Information Needed] ## Bias, Risks, and Limitations <!-- This section is meant to convey both technical and sociotechnical limitations. --> [More Information Needed] ### Recommendations <!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. --> Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. ## Citation [optional] <!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. --> **BibTeX:** [More Information Needed] **APA:** [More Information Needed] ## Glossary [optional] <!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. --> [More Information Needed] ## More Information [optional] [More Information Needed] ## Dataset Card Authors [optional] [More Information Needed] ## Dataset Card Contact [More Information Needed]
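Beyond the per-task configurations, the aggregated "results" configuration mentioned above can be read the same way; a sketch, assuming the `results` config and `latest` split naming visible in the other leaderboard detail repos in this dump:

```python
from datasets import load_dataset

# Sketch: load the aggregated run metrics instead of a single task's details.
results = load_dataset("open-llm-leaderboard/details_Azure99__blossom-v4-mistral-7b",
                       "results",
                       split="latest")
```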
open-llm-leaderboard/details_Azure99__blossom-v4-mistral-7b
[ "region:us" ]
2023-12-28T11:12:37+00:00
{"pretty_name": "Evaluation run of Azure99/blossom-v4-mistral-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [Azure99/blossom-v4-mistral-7b](https://huggingface.co/Azure99/blossom-v4-mistral-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Azure99__blossom-v4-mistral-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-28T11:10:20.298869](https://huggingface.co/datasets/open-llm-leaderboard/details_Azure99__blossom-v4-mistral-7b/blob/main/results_2023-12-28T11-10-20.298869.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6235420002967518,\n \"acc_stderr\": 0.03272388603364805,\n \"acc_norm\": 0.6281854377869052,\n \"acc_norm_stderr\": 0.03338061598239654,\n \"mc1\": 0.36964504283965727,\n \"mc1_stderr\": 0.016898180706973888,\n \"mc2\": 0.5384391963865467,\n \"mc2_stderr\": 0.015414673673859326\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5793515358361775,\n \"acc_stderr\": 0.014426211252508397,\n \"acc_norm\": 0.6203071672354948,\n \"acc_norm_stderr\": 0.014182119866974872\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6390161322445728,\n \"acc_stderr\": 0.004793042992396035,\n \"acc_norm\": 0.8290181238797052,\n \"acc_norm_stderr\": 0.0037572368063973345\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.28,\n \"acc_stderr\": 0.04512608598542129,\n \"acc_norm\": 0.28,\n \"acc_norm_stderr\": 0.04512608598542129\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901409,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901409\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6381578947368421,\n \"acc_stderr\": 0.03910525752849724,\n \"acc_norm\": 0.6381578947368421,\n \"acc_norm_stderr\": 0.03910525752849724\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6943396226415094,\n \"acc_stderr\": 0.028353298073322666,\n \"acc_norm\": 0.6943396226415094,\n \"acc_norm_stderr\": 0.028353298073322666\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7152777777777778,\n \"acc_stderr\": 0.03773809990686934,\n \"acc_norm\": 0.7152777777777778,\n \"acc_norm_stderr\": 0.03773809990686934\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 
0.45,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5838150289017341,\n \"acc_stderr\": 0.03758517775404947,\n \"acc_norm\": 0.5838150289017341,\n \"acc_norm_stderr\": 0.03758517775404947\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4019607843137255,\n \"acc_stderr\": 0.048786087144669955,\n \"acc_norm\": 0.4019607843137255,\n \"acc_norm_stderr\": 0.048786087144669955\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.0440844002276808,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.0440844002276808\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5787234042553191,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.5787234042553191,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.49122807017543857,\n \"acc_stderr\": 0.04702880432049615,\n \"acc_norm\": 0.49122807017543857,\n \"acc_norm_stderr\": 0.04702880432049615\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5724137931034483,\n \"acc_stderr\": 0.04122737111370333,\n \"acc_norm\": 0.5724137931034483,\n \"acc_norm_stderr\": 0.04122737111370333\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.41005291005291006,\n \"acc_stderr\": 0.02533120243894443,\n \"acc_norm\": 0.41005291005291006,\n \"acc_norm_stderr\": 0.02533120243894443\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.04375888492727061,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.04375888492727061\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.049236596391733084,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.049236596391733084\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7322580645161291,\n \"acc_stderr\": 0.025189006660212378,\n \"acc_norm\": 0.7322580645161291,\n \"acc_norm_stderr\": 0.025189006660212378\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5270935960591133,\n \"acc_stderr\": 0.03512819077876106,\n \"acc_norm\": 0.5270935960591133,\n \"acc_norm_stderr\": 0.03512819077876106\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7696969696969697,\n \"acc_stderr\": 0.0328766675860349,\n \"acc_norm\": 0.7696969696969697,\n \"acc_norm_stderr\": 0.0328766675860349\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945627,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945627\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8549222797927462,\n \"acc_stderr\": 0.025416343096306433,\n \"acc_norm\": 0.8549222797927462,\n \"acc_norm_stderr\": 0.025416343096306433\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6153846153846154,\n \"acc_stderr\": 
0.024666744915187208,\n \"acc_norm\": 0.6153846153846154,\n \"acc_norm_stderr\": 0.024666744915187208\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3148148148148148,\n \"acc_stderr\": 0.028317533496066468,\n \"acc_norm\": 0.3148148148148148,\n \"acc_norm_stderr\": 0.028317533496066468\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6554621848739496,\n \"acc_stderr\": 0.030868682604121622,\n \"acc_norm\": 0.6554621848739496,\n \"acc_norm_stderr\": 0.030868682604121622\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.03822746937658753,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.03822746937658753\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8018348623853211,\n \"acc_stderr\": 0.017090573804217902,\n \"acc_norm\": 0.8018348623853211,\n \"acc_norm_stderr\": 0.017090573804217902\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4675925925925926,\n \"acc_stderr\": 0.03402801581358966,\n \"acc_norm\": 0.4675925925925926,\n \"acc_norm_stderr\": 0.03402801581358966\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7721518987341772,\n \"acc_stderr\": 0.027303484599069422,\n \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.027303484599069422\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6502242152466368,\n \"acc_stderr\": 0.03200736719484503,\n \"acc_norm\": 0.6502242152466368,\n \"acc_norm_stderr\": 0.03200736719484503\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7633587786259542,\n \"acc_stderr\": 0.03727673575596913,\n \"acc_norm\": 0.7633587786259542,\n \"acc_norm_stderr\": 0.03727673575596913\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7603305785123967,\n \"acc_stderr\": 0.03896878985070417,\n \"acc_norm\": 0.7603305785123967,\n \"acc_norm_stderr\": 0.03896878985070417\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.49107142857142855,\n \"acc_stderr\": 0.04745033255489123,\n \"acc_norm\": 0.49107142857142855,\n \"acc_norm_stderr\": 0.04745033255489123\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7961165048543689,\n \"acc_stderr\": 0.039891398595317706,\n \"acc_norm\": 0.7961165048543689,\n \"acc_norm_stderr\": 0.039891398595317706\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.02158649400128136,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.02158649400128136\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.66,\n \"acc_stderr\": 0.04760952285695237,\n \"acc_norm\": 0.66,\n \"acc_norm_stderr\": 0.04760952285695237\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.80970625798212,\n \"acc_stderr\": 0.014036945850381398,\n \"acc_norm\": 0.80970625798212,\n \"acc_norm_stderr\": 0.014036945850381398\n 
},\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6878612716763006,\n \"acc_stderr\": 0.024946792225272314,\n \"acc_norm\": 0.6878612716763006,\n \"acc_norm_stderr\": 0.024946792225272314\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3329608938547486,\n \"acc_stderr\": 0.015761716178397566,\n \"acc_norm\": 0.3329608938547486,\n \"acc_norm_stderr\": 0.015761716178397566\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7189542483660131,\n \"acc_stderr\": 0.025738854797818737,\n \"acc_norm\": 0.7189542483660131,\n \"acc_norm_stderr\": 0.025738854797818737\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.707395498392283,\n \"acc_stderr\": 0.02583989833487798,\n \"acc_norm\": 0.707395498392283,\n \"acc_norm_stderr\": 0.02583989833487798\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7129629629629629,\n \"acc_stderr\": 0.02517104191530968,\n \"acc_norm\": 0.7129629629629629,\n \"acc_norm_stderr\": 0.02517104191530968\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4511082138200782,\n \"acc_stderr\": 0.012709037347346233,\n \"acc_norm\": 0.4511082138200782,\n \"acc_norm_stderr\": 0.012709037347346233\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6323529411764706,\n \"acc_stderr\": 0.029289413409403192,\n \"acc_norm\": 0.6323529411764706,\n \"acc_norm_stderr\": 0.029289413409403192\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6584967320261438,\n \"acc_stderr\": 0.01918463932809249,\n \"acc_norm\": 0.6584967320261438,\n \"acc_norm_stderr\": 0.01918463932809249\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6454545454545455,\n \"acc_stderr\": 0.04582004841505417,\n \"acc_norm\": 0.6454545454545455,\n \"acc_norm_stderr\": 0.04582004841505417\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.029393609319879804,\n \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.029393609319879804\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.536144578313253,\n \"acc_stderr\": 0.038823108508905954,\n \"acc_norm\": 0.536144578313253,\n \"acc_norm_stderr\": 0.038823108508905954\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8128654970760234,\n \"acc_stderr\": 0.029913127232368036,\n \"acc_norm\": 0.8128654970760234,\n \"acc_norm_stderr\": 0.029913127232368036\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.36964504283965727,\n \"mc1_stderr\": 0.016898180706973888,\n \"mc2\": 0.5384391963865467,\n \"mc2_stderr\": 0.015414673673859326\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7726913970007893,\n \"acc_stderr\": 0.011778612167091087\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4313874147081122,\n \"acc_stderr\": 0.013642195352511575\n }\n}\n```", "repo_url": "https://huggingface.co/Azure99/blossom-v4-mistral-7b", "leaderboard_url": 
"https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|arc:challenge|25_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|gsm8k|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hellaswag|10_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T11-10-20.298869.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T11-10-20.298869.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-28T11-10-20.298869.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-28T11-10-20.298869.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T11-10-20.298869.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-28T11-10-20.298869.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["**/details_harness|winogrande|5_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-28T11-10-20.298869.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_28T11_10_20.298869", "path": ["results_2023-12-28T11-10-20.298869.parquet"]}, {"split": "latest", "path": 
["results_2023-12-28T11-10-20.298869.parquet"]}]}]}
2023-12-28T11:12:57+00:00
[]
[]
TAGS #region-us
# Dataset Card for Evaluation run of Azure99/blossom-v4-mistral-7b Dataset automatically created during the evaluation run of model Azure99/blossom-v4-mistral-7b on the Open LLM Leaderboard. The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks. The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results. An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard). To load the details from a run, you can for instance do the following: ## Latest results These are the latest results from run 2023-12-28T11:10:20.298869 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval): ## Dataset Details ### Dataset Description - Curated by: - Funded by [optional]: - Shared by [optional]: - Language(s) (NLP): - License: ### Dataset Sources [optional] - Repository: - Paper [optional]: - Demo [optional]: ## Uses ### Direct Use ### Out-of-Scope Use ## Dataset Structure ## Dataset Creation ### Curation Rationale ### Source Data #### Data Collection and Processing #### Who are the source data producers? ### Annotations [optional] #### Annotation process #### Who are the annotators? #### Personal and Sensitive Information ## Bias, Risks, and Limitations ### Recommendations Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations. [optional] BibTeX: APA: ## Glossary [optional] ## More Information [optional] ## Dataset Card Authors [optional] ## Dataset Card Contact
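The loader snippet referenced by "do the following" was stripped from this rendering; a minimal sketch is shown below. The config name `harness_winogrande_5` is taken from this card's metadata, while the details repo id is an assumption based on the Open LLM Leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming convention and is not stated in this record.

```python
from datasets import load_dataset

# Repo id is assumed from the leaderboard's naming convention;
# "harness_winogrande_5" is one of the configs listed in this card's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_Azure99__blossom-v4-mistral-7b",
    "harness_winogrande_5",
    split="train",  # the "train" split always points to the latest results
)
```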
[ "# Dataset Card for Evaluation run of Azure99/blossom-v4-mistral-7b\n\n\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v4-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-28T11:10:20.298869(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ "TAGS\n#region-us \n", "# Dataset Card for Evaluation run of Azure99/blossom-v4-mistral-7b\n\n\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v4-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:", "## Latest results\n\nThese are the latest results from run 2023-12-28T11:10:20.298869(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):", "## Dataset Details", "### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:", "### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:", "## Uses", "### Direct Use", "### Out-of-Scope Use", "## Dataset Structure", "## Dataset Creation", "### Curation Rationale", "### Source Data", "#### Data Collection and Processing", "#### Who are the source data producers?", "### Annotations [optional]", "#### Annotation process", "#### Who are the annotators?", "#### Personal and Sensitive Information", "## Bias, Risks, and Limitations", "### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:", "## Glossary [optional]", "## More Information [optional]", "## Dataset Card Authors [optional]", "## Dataset Card Contact" ]
[ 6, 187, 66, 4, 40, 29, 3, 4, 9, 6, 5, 7, 4, 7, 10, 9, 5, 9, 8, 10, 46, 8, 7, 10, 5 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Azure99/blossom-v4-mistral-7b\n\n\n\nDataset automatically created during the evaluation run of model Azure99/blossom-v4-mistral-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-28T11:10:20.298869(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact" ]
7db955b5e6625bd803523667aca26f01fac10154
# Amazon ESCI hard-negatives dataset

A dataset in a [nixietune](https://github.com/nixiesearch/nixietune) compatible format:

```json
{
  "query": "# cellist thats not a hashtag",
  "pos": [
    "Funny Cellists That's Not A Hashtag Music Sweatshirt",
    "Marvel Deadpool Crunch Cereal Comics Funny Adult Men’s Graphic T-Shirt (Black, Medium)",
    "Womens Funny Cellists That's Not A Hashtag Music V-Neck T-Shirt",
    "Cellist Gift Orchestra Conductor Thats A Sharp Not A Hashtag T-Shirt"
  ],
  "neg": [
    "Feelin Good Tees My Opinion Offended You Adult Humor T Shirt XL Black",
    "Christian Faith & Cross T-Shirt - Christian Faith T Shirts T-Shirt",
    "PLUS PLUS - 240 Piece Basic Mix - Construction Building Stem/Steam Toy, Mini Puzzle Blocks for Kids",
    "Caution I Learned to Drive Through Video Games - Funny Gamer T-Shirt",
    "People Who Tolerate Me On A Daily Basis T Shirt L Black"
  ]
}
```

This is the expanded version of the [Amazon ESCI small-en](https://github.com/amazon-science/esci-data) dataset with the following additions:

* for all queries, an extra 32 negatives were generated
* negative generation was done with an [RRF](https://www.elastic.co/guide/en/elasticsearch/reference/current/rrf.html)-based hybrid search, mixing the BM25 score with cosine similarity based on the [intfloat/e5-base-v2](https://huggingface.co/intfloat/e5-base-v2) embedding model.
* can be loaded with [HF datasets](https://huggingface.co/docs/datasets/index) directly.

## Usage

```python
from datasets import load_dataset

data = load_dataset('nixiesearch/amazon-esci-hardnegatives', split="train")
```

## License

Apache 2.0
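A common next step after the Usage snippet above is flattening each row into (query, positive, negative) training triplets. The sketch below is a minimal example of that, assuming the `query`, `positive`, and `negative` column names declared in this card's `dataset_info` metadata; the triplet format itself is a downstream choice, not something nixietune mandates.

```python
from datasets import load_dataset

data = load_dataset("nixiesearch/amazon-esci-hardnegatives", split="train")

# Column names follow this card's dataset_info: `query` is a string,
# `positive` and `negative` are sequences of strings.
def to_triplets(row):
    for pos in row["positive"]:
        for neg in row["negative"]:
            yield row["query"], pos, neg

# Materialize triplets for a small sample of rows.
triplets = [t for row in data.select(range(10)) for t in to_triplets(row)]
print(triplets[0])
```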
nixiesearch/amazon-esci-hardnegatives
[ "task_categories:sentence-similarity", "size_categories:100K<n<1M", "source_datasets:Amazon ESCI", "language:en", "license:apache-2.0", "text", "region:us" ]
2023-12-28T11:28:29+00:00
{"language": ["en"], "license": "apache-2.0", "size_categories": ["100K<n<1M"], "source_datasets": ["Amazon ESCI"], "task_categories": ["sentence-similarity"], "pretty_name": "Amazon ESCI Hard Negatives", "tags": ["text"], "dataset_info": {"config_name": "default", "features": [{"name": "query", "dtype": "string"}, {"name": "positive", "sequence": "string"}, {"name": "negative", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 361711993, "num_examples": 74589}, {"name": "test", "num_bytes": 109820429, "num_examples": 22398}]}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train/*"}, {"split": "test", "path": "data/test/*"}]}], "train-eval-index": [{"config": "default", "task": "sentence-similarity", "splits": {"train_split": "train", "eval_split": "test"}}]}
2023-12-29T13:39:04+00:00
[]
[ "en" ]
TAGS #task_categories-sentence-similarity #size_categories-100K<n<1M #source_datasets-Amazon ESCI #language-English #license-apache-2.0 #text #region-us
# Amazon ESCI hard-negatives dataset A dataset in a nixietune compatible format: This is the expanded version of the Amazon ESCI small-en dataset with the following additions: * for all queries, an extra 32 negatives were generated * negative generation was done with an RRF-based hybrid search, mixing the BM25 score with cosine similarity based on the intfloat/e5-base-v2 embedding model. * can be loaded with HF datasets directly. ## Usage ## License Apache 2.0
[ "# Amazon ESCI hard-negatives dataset\n\nA dataset in a nixietune compatible format:\n\n\n\nThis is the expanded version of the Amazon ESCI small-en dataset with the following additions:\n* for all queries, extra 32 negatives were genererated\n* negative generation was done with a RRF-based hybrid search, mixing the BM25 score with cosine-similarity based on [intfloat/e5-base-v2] emnbedding model.\n* can be loaded with HF datasets directly.", "## Usage", "## License\n\nApache 2.0" ]
[ "TAGS\n#task_categories-sentence-similarity #size_categories-100K<n<1M #source_datasets-Amazon ESCI #language-English #license-apache-2.0 #text #region-us \n", "# Amazon ESCI hard-negatives dataset\n\nA dataset in a nixietune compatible format:\n\n\n\nThis is the expanded version of the Amazon ESCI small-en dataset with the following additions:\n* for all queries, extra 32 negatives were genererated\n* negative generation was done with a RRF-based hybrid search, mixing the BM25 score with cosine-similarity based on [intfloat/e5-base-v2] emnbedding model.\n* can be loaded with HF datasets directly.", "## Usage", "## License\n\nApache 2.0" ]
[ 55, 116, 3, 5 ]
[ "passage: TAGS\n#task_categories-sentence-similarity #size_categories-100K<n<1M #source_datasets-Amazon ESCI #language-English #license-apache-2.0 #text #region-us \n# Amazon ESCI hard-negatives dataset\n\nA dataset in a nixietune compatible format:\n\n\n\nThis is the expanded version of the Amazon ESCI small-en dataset with the following additions:\n* for all queries, extra 32 negatives were genererated\n* negative generation was done with a RRF-based hybrid search, mixing the BM25 score with cosine-similarity based on [intfloat/e5-base-v2] emnbedding model.\n* can be loaded with HF datasets directly.## Usage## License\n\nApache 2.0" ]
4276288f6967365f9db55b497860d810b1f49ee3
# Dataset Card for "nitro_binarized" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
jan-hq/nitro_binarized
[ "region:us" ]
2023-12-28T12:23:45+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 1018426.9979852249, "num_examples": 2680}, {"name": "test", "num_bytes": 113243.00201477502, "num_examples": 298}], "download_size": 505174, "dataset_size": 1131670.0}}
2023-12-28T12:23:52+00:00
[]
[]
TAGS #region-us
# Dataset Card for "nitro_binarized" More Information needed
[ "# Dataset Card for \"nitro_binarized\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"nitro_binarized\"\n\nMore Information needed" ]
[ 6, 16 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"nitro_binarized\"\n\nMore Information needed" ]
27e3dca147ccb297b595495bc274fef523f120c2
# Dataset Card for "both_raw" [More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
tyzhu/both_raw
[ "region:us" ]
2023-12-28T12:35:00+00:00
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 522250, "num_examples": 1800}, {"name": "test", "num_bytes": 168860, "num_examples": 600}, {"name": "validation", "num_bytes": 168860, "num_examples": 600}], "download_size": 159230, "dataset_size": 859970}}
2023-12-28T12:35:08+00:00
[]
[]
TAGS #region-us
# Dataset Card for "both_raw" More Information needed
[ "# Dataset Card for \"both_raw\"\n\nMore Information needed" ]
[ "TAGS\n#region-us \n", "# Dataset Card for \"both_raw\"\n\nMore Information needed" ]
[ 6, 14 ]
[ "passage: TAGS\n#region-us \n# Dataset Card for \"both_raw\"\n\nMore Information needed" ]
ce2b702592d9757ed110d9725cb2f59fd2466e41
Creates a pages dataset from Wikipedia. It explores the 40 root categories and their sub-categories to collect pages, and provides up to 2000 pages per category. See https://github.com/tarekziade/mwcat
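A minimal loading sketch; the column names (`id`, `title`, `summary`, `text`, `categories`) and the train/test splits come from this card's metadata:

```python
from collections import Counter

from datasets import load_dataset

# Columns per this card's dataset_info: id, title, summary, text,
# and categories (a sequence of strings).
data = load_dataset("tarekziade/wikipedia-topics", split="train")

# Count category frequencies over a small sample; a page may carry
# several categories at once.
counts = Counter(
    cat for row in data.select(range(1000)) for cat in row["categories"]
)
print(counts.most_common(10))
```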
tarekziade/wikipedia-topics
[ "license:cc-by-sa-4.0", "region:us" ]
2023-12-28T12:44:29+00:00
{"license": "cc-by-sa-4.0", "dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "categories", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 447696713.49705654, "num_examples": 67573}, {"name": "test", "num_bytes": 49749968.50294345, "num_examples": 7509}], "download_size": 298225345, "dataset_size": 497446682.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]}
2024-01-03T22:33:04+00:00
[]
[]
TAGS #license-cc-by-sa-4.0 #region-us
Creates a pages dataset using Wikipedia. Explores the 40 root categories and their sub-categories to collect pages. The produced dataset provides up to 2000 pages per category. See URL
[]
[ "TAGS\n#license-cc-by-sa-4.0 #region-us \n" ]
[ 17 ]
[ "passage: TAGS\n#license-cc-by-sa-4.0 #region-us \n" ]
dd912e7ffe0bf26c8b7e19565a7a683039c5c7bf
# Portuguese Hate Speech Expanded Dataset (TuPyE)

TuPyE, an enhanced iteration of TuPy, encompasses a compilation of 43,668 meticulously annotated documents specifically selected for the purpose of hate speech detection within diverse social network contexts. This augmented dataset integrates supplementary annotations and amalgamates with datasets sourced from [Fortuna et al. (2019)](https://aclanthology.org/W19-3510/), [Leite et al. (2020)](https://arxiv.org/abs/2010.04543), and [Vargas et al. (2022)](https://arxiv.org/abs/2103.14972), complemented by an infusion of 10,000 original documents from the [TuPy-Dataset](https://huggingface.co/datasets/Silly-Machine/TuPy-Dataset). In light of the constrained availability of annotated data in Portuguese compared to English, TuPyE is committed to the expansion and enhancement of existing datasets. This augmentation serves to facilitate the development of advanced hate speech detection models through the utilization of machine learning (ML) and natural language processing (NLP) techniques.

This repository is organized as follows:

```sh
root.
├── binary     : binary dataset (including training and testing split)
├── multilabel : multilabel dataset (including training and testing split)
└── README.md  : documentation and card metadata
```

We highly recommend reading the associated research paper [TuPy-E: detecting hate speech in Brazilian Portuguese social media with a novel dataset and comprehensive analysis of models](https://arxiv.org/abs/2312.17704) to gain comprehensive insights into the advancements integrated into this extended dataset.

## Security measures

To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user," and any references to external websites have been omitted.

## Annotation and voting process

In the pursuit of advancing the field of automatic hate speech detection in Portuguese, our team undertook the meticulous task of creating a comprehensive database. This endeavor involved the integration of labeled document sets from seminal studies in the domain, specifically those conducted by Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022). To ensure the highest degree of consistency and compatibility within our dataset, we adhered to stringent guidelines for text integration, detailed as follows:

1. **Fortuna et al. (2019)**: This study presented a dataset of 5,670 tweets, each annotated by three independent evaluators to ascertain the presence or absence of hate speech. In our integration process, we adopted a simple majority-voting mechanism to classify each document, ensuring a consistent approach to hate speech identification across the dataset.

2. **Leite et al. (2020)**: The dataset from this research encompassed 21,000 tweets, annotated by 129 volunteers. Each tweet was reviewed by three different assessors. The study identified six categories of toxic speech, namely: (i) homophobia, (ii) racism, (iii) xenophobia, (iv) offensive language, (v) obscene language, and (vi) misogyny. In aligning with our operational definition of hate speech, we chose to exclude texts that solely fell under the categories of offensive and/or obscene language. Consistent with our methodology, a straightforward majority-voting process was utilized for the classification of these texts.

3. **Vargas et al. (2022)**: This research involved a compilation of 7,000 comments sourced from Instagram, each labeled by a trio of annotators. These data had already been subjected to a simple majority-voting classification, thereby obviating the need for us to apply additional text classification protocols.

Through the application of these rigorous integration guidelines, we have succeeded in establishing a robust, unified database that stands as a valuable resource for the development and refinement of automatic hate speech detection systems in the Portuguese language.

## Data structure

A data point comprises the tweet text (a string) along with thirteen categories; each category is assigned a value of 0 when there is an absence of aggressive or hateful content and a value of 1 when such content is present. These values represent the consensus of annotators regarding the presence of aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. An illustration from the multilabel TuPyE dataset is depicted below:

```python
{
    "source": "twitter",
    "text": "e tem pobre de direita imbecil que ainda defendia a manutenção da política de preços atrelada ao dólar link",
    "researcher": "leite et al",
    "year": 2020,
    "aggressive": 1,
    "hate": 1,
    "ageism": 0,
    "aporophobia": 1,
    "body shame": 0,
    "capacitism": 0,
    "lgbtphobia": 0,
    "political": 1,
    "racism": 0,
    "religious intolerance": 0,
    "misogyny": 0,
    "xenophobia": 0,
    "other": 0
}
```

# Dataset content

Table 1 delineates the quantity of documents annotated in TuPyE, systematically categorized by the respective researchers.

#### Table 1 - TuPyE composition

| Label          | Count  | Source    |
|----------------|--------|-----------|
| Leite et al.   | 21,000 | Twitter   |
| TuPy           | 10,000 | Twitter   |
| Vargas et al.  | 7,000  | Instagram |
| Fortuna et al. | 5,668  | Twitter   |

Table 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents.

#### Table 2 - Count of non-aggressive and aggressive documents

| Label                 | Count  |
|-----------------------|--------|
| Non-aggressive        | 31,121 |
| Aggressive - Not hate | 3,180  |
| Aggressive - Hate     | 9,367  |
| Total                 | 43,668 |

Table 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.

#### Table 3 - Hate categories count

| Label                 | Count |
|-----------------------|-------|
| Ageism                | 57    |
| Aporophobia           | 66    |
| Body shame            | 285   |
| Capacitism            | 99    |
| LGBTphobia            | 805   |
| Political             | 1,149 |
| Racism                | 290   |
| Religious intolerance | 108   |
| Misogyny              | 1,675 |
| Xenophobia            | 357   |
| Other                 | 4,476 |
| Total                 | 9,367 |

# Acknowledgments

The TuPy-E project is the result of Felipe Oliveira's thesis work and the contributions of several collaborators. This project is financed by the Federal University of Rio de Janeiro ([UFRJ](https://ufrj.br/)) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering ([COPPE](https://coppe.ufrj.br/)).

# References

[1] P. Fortuna, J. Rocha Da Silva, J. Soler-Company, L. Wanner, and S. Nunes, "A Hierarchically-Labeled Portuguese Hate Speech Dataset," 2019. [Online]. Available: https://github.com/t-davidson/hate-s

[2] J. A. Leite, D. F. Silva, K. Bontcheva, and C. Scarton, "Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis," Oct. 2020. [Online]. Available: http://arxiv.org/abs/2010.04543

[3] F. Vargas, I. Carvalho, F. Góes, T. A. S. Pardo, and F. Benevenuto, "HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection," 2022. [Online]. Available: https://aclanthology.org/2022.lrec-1.777/
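A minimal loading sketch; the `binary` and `multilabel` config names and the train/test splits are declared in this card's metadata, while the exact column names are an assumption based on the multilabel illustration above:

```python
from datasets import load_dataset

# Config names and splits come from the card metadata; column names
# ("text", "hate", ...) are assumed to match the multilabel example.
binary = load_dataset("Silly-Machine/TuPyE-Dataset", "binary", split="train")
multilabel = load_dataset("Silly-Machine/TuPyE-Dataset", "multilabel", split="train")

hateful = multilabel.filter(lambda row: row["hate"] == 1)
print(f"{len(hateful)} of {len(multilabel)} training documents are labeled as hate speech")
```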
Silly-Machine/TuPyE-Dataset
[ "task_categories:text-classification", "annotations_creators:crowdsourced", "language_creators:crowdsourced", "multilinguality:monolingual", "size_categories:10K<n<100K", "source_datasets:crowdsourced", "language:pt", "license:cc-by-4.0", "hate-speech-detection", "arxiv:2010.04543", "arxiv:2103.14972", "arxiv:2312.17704", "region:us" ]
2023-12-28T12:46:12+00:00
{"annotations_creators": ["crowdsourced"], "language_creators": ["crowdsourced"], "language": ["pt"], "license": "cc-by-4.0", "multilinguality": ["monolingual"], "size_categories": ["10K<n<100K"], "source_datasets": ["crowdsourced"], "task_categories": ["text-classification"], "task_ids": [], "pretty_name": "TuPy-Dataset", "language_bcp47": ["pt-BR"], "tags": ["hate-speech-detection"], "configs": [{"config_name": "multilabel", "data_files": [{"split": "train", "path": "multilabel/multilabel_train.csv"}, {"split": "test", "path": "multilabel/multilabel_test.csv"}]}, {"config_name": "binary", "data_files": [{"split": "train", "path": "binary/binary_train.csv"}, {"split": "test", "path": "binary/binary_test.csv"}]}]}
2024-01-01T14:47:42+00:00
[ "2010.04543", "2103.14972", "2312.17704" ]
[ "pt" ]
TAGS #task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-crowdsourced #language-Portuguese #license-cc-by-4.0 #hate-speech-detection #arxiv-2010.04543 #arxiv-2103.14972 #arxiv-2312.17704 #region-us
Portuguese Hate Speech Expanded Dataset (TuPyE) =============================================== TuPyE, an enhanced iteration of TuPy, encompasses a compilation of 43,668 meticulously annotated documents specifically selected for the purpose of hate speech detection within diverse social network contexts. This augmented dataset integrates supplementary annotations and amalgamates with datasets sourced from Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022), complemented by an infusion of 10,000 original documents from the TuPy-Dataset. In light of the constrained availability of annotated data in Portuguese compared to English, TuPyE is committed to the expansion and enhancement of existing datasets. This augmentation serves to facilitate the development of advanced hate speech detection models through the utilization of machine learning (ML) and natural language processing (NLP) techniques. This repository is organized as follows: We highly recommend reading the associated research paper TuPy-E: detecting hate speech in Brazilian Portuguese social media with a novel dataset and comprehensive analysis of models to gain comprehensive insights into the advancements integrated into this extended dataset. Security measures ----------------- To safeguard user identity and uphold the integrity of this dataset, all user mentions have been anonymized as "@user," and any references to external websites have been omitted. Annotation and voting process ----------------------------- In the pursuit of advancing the field of automatic hate speech detection in Portuguese, our team undertook the meticulous task of creating a comprehensive database. This endeavor involved the integration of labeled document sets from seminal studies in the domain, specifically those conducted by Fortuna et al. (2019), Leite et al. (2020), and Vargas et al. (2022). To ensure the highest degree of consistency and compatibility within our dataset, we adhered to stringent guidelines for text integration, detailed as follows: 1. Fortuna et al. (2019): This study presented a dataset of 5,670 tweets, each annotated by three independent evaluators to ascertain the presence or absence of hate speech. In our integration process, we adopted a simple majority-voting mechanism to classify each document, ensuring a consistent approach to hate speech identification across the dataset. 2. Leite et al. (2020): The dataset from this research encompassed 21,000 tweets, annotated by 129 volunteers. Each tweet was reviewed by three different assessors. The study identified six categories of toxic speech, namely: (i) homophobia, (ii) racism, (iii) xenophobia, (iv) offensive language, (v) obscene language, and (vi) misogyny. In aligning with our operational definition of hate speech, we chose to exclude texts that solely fell under the categories of offensive and/or obscene language. Consistent with our methodology, a straightforward majority-voting process was utilized for the classification of these texts. 3. Vargas et al. (2022): This research involved a compilation of 7,000 comments sourced from Instagram, each labeled by a trio of annotators. These data had already been subjected to a simple majority-voting classification, thereby obviating the need for us to apply additional text classification protocols.
Through the application of these integration guidelines, we have established a robust, unified database that serves as a valuable resource for developing and refining automatic hate speech detection systems for Portuguese.


Data structure
--------------


A data point comprises the tweet text (a string) along with thirteen binary categories; each category is assigned a value of 0 when aggressive or hateful content is absent and 1 when such content is present. These values represent the annotators' consensus on the presence of each of the following categories: aggressive, hate, ageism, aporophobia, body shame, capacitism, lgbtphobia, political, racism, religious intolerance, misogyny, xenophobia, and others. An illustration of a multilabel TuPyE data point is shown below (the tweet text and label values here are hypothetical, for illustration only):


| text | aggressive | hate | ageism | aporophobia | body shame | capacitism | lgbtphobia | political | racism | religious intolerance | misogyny | xenophobia | others |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| "@user example of an aggressive tweet" | 1 | 1 | 0 | 0 | 0 | 0 | 0 | 0 | 1 | 0 | 0 | 0 | 0 |


Dataset content
===============


Table 1 reports the number of annotated documents in TuPyE, broken down by source study.


#### Table 1 - TuPyE composition


| Label | Count | Source |
| --- | --- | --- |
| Leite et al. | 21,000 | Twitter |
| TuPy | 10,000 | Twitter |
| Vargas et al. | 7,000 | Instagram |
| Fortuna et al. | 5,668 | Twitter |


Table 2 provides a detailed breakdown of the dataset by the occurrence of aggressive speech and the manifestation of hate speech within the documents.


#### Table 2 - Count of non-aggressive and aggressive documents


Table 3 provides a detailed breakdown of the dataset by the occurrence of the distinct categories of hate speech.


#### Table 3 - Hate categories count


Acknowledgments
===============


The TuPy-E project is the result of Felipe Oliveira's thesis work and the contributions of several collaborators. The project is financed by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE).


References
==========


[1] P. Fortuna, J. Rocha Da Silva, J. Soler-Company, L. Wanner, and S. Nunes, “A Hierarchically-Labeled Portuguese Hate Speech Dataset,” 2019. [Online]. Available: URL

[2] J. A. Leite, D. F. Silva, K. Bontcheva, and C. Scarton, “Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis,” Oct. 2020. [Online]. Available: URL

[3] F. Vargas, I. Carvalho, F. Góes, T. A. S. Pardo, and F. Benevenuto, “HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection,” 2022. [Online]. Available: URL
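As a usage sketch, the snippet below shows how a dataset with this structure is typically loaded and summarized with the Hugging Face `datasets` library. The repository id, configuration name, and category column names are placeholder assumptions for illustration (actual field names may differ, e.g. using underscores); consult the hub page for the real identifiers.

```python
from datasets import load_dataset

# Placeholder repository id and configuration -- substitute the actual
# TuPyE identifiers from the Hugging Face Hub.
ds = load_dataset("user/TuPyE-Dataset", "multilabel", split="train")

# Table 2-style breakdown: aggressive vs. non-aggressive documents.
aggressive = sum(row["aggressive"] for row in ds)
print(f"aggressive: {aggressive}, non-aggressive: {len(ds) - aggressive}")

# Table 3-style breakdown: document counts per hate speech category.
# Column names here mirror the category list in the card and are assumptions.
categories = [
    "ageism", "aporophobia", "body shame", "capacitism", "lgbtphobia",
    "political", "racism", "religious intolerance", "misogyny",
    "xenophobia", "others",
]
for category in categories:
    print(category, sum(row[category] for row in ds))
```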
[ "#### Table 1 - TuPyE composition\n\n\nLabel: Leite et al., Count: 21,000, Source: Twitter\nLabel: TuPy, Count: 10,000, Source: Twitter\nLabel: Vargas et al., Count: 7,000, Source: Instagram\nLabel: Fortuna et al., Count: 5,668, Source: Twitter\n\n\nTable 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents", "#### Table 2 - Count of non-aggressive and aggressive documents\n\n\n\nTable 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.", "#### Table 3 - Hate categories count\n\n\n\nAcknowledge\n===========\n\n\nThe TuPy-E project is the result of the development of Felipe Oliveira's thesis and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE).\n\n\nReferences\n==========\n\n\n[1] P. Fortuna, J. Rocha Da Silva, J. Soler-Company, L. Wanner, and S. Nunes, “A Hierarchically-Labeled Portuguese Hate Speech Dataset,” 2019. [Online]. Available: URL\n\n\n[2] J. A. Leite, D. F. Silva, K. Bontcheva, and C. Scarton, “Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis,” Oct. 2020, [Online]. Available: URL\n\n\n[3] F. Vargas, I. Carvalho, F. Góes, T. A. S. Pardo, and F. Benevenuto, “HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection,” 2022. [Online]. Available: URL" ]
[ "TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-crowdsourced #language-Portuguese #license-cc-by-4.0 #hate-speech-detection #arxiv-2010.04543 #arxiv-2103.14972 #arxiv-2312.17704 #region-us \n", "#### Table 1 - TuPyE composition\n\n\nLabel: Leite et al., Count: 21,000, Source: Twitter\nLabel: TuPy, Count: 10,000, Source: Twitter\nLabel: Vargas et al., Count: 7,000, Source: Instagram\nLabel: Fortuna et al., Count: 5,668, Source: Twitter\n\n\nTable 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents", "#### Table 2 - Count of non-aggressive and aggressive documents\n\n\n\nTable 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech.", "#### Table 3 - Hate categories count\n\n\n\nAcknowledge\n===========\n\n\nThe TuPy-E project is the result of the development of Felipe Oliveira's thesis and the work of several collaborators. This project is financed by the Federal University of Rio de Janeiro (UFRJ) and the Alberto Luiz Coimbra Institute for Postgraduate Studies and Research in Engineering (COPPE).\n\n\nReferences\n==========\n\n\n[1] P. Fortuna, J. Rocha Da Silva, J. Soler-Company, L. Wanner, and S. Nunes, “A Hierarchically-Labeled Portuguese Hate Speech Dataset,” 2019. [Online]. Available: URL\n\n\n[2] J. A. Leite, D. F. Silva, K. Bontcheva, and C. Scarton, “Toxic Language Detection in Social Media for Brazilian Portuguese: New Dataset and Multilingual Analysis,” Oct. 2020, [Online]. Available: URL\n\n\n[3] F. Vargas, I. Carvalho, F. Góes, T. A. S. Pardo, and F. Benevenuto, “HateBR: A Large Expert Annotated Corpus of Brazilian Instagram Comments for Offensive Language and Hate Speech Detection,” 2022. [Online]. Available: URL" ]
[ 121, 113, 49, 285 ]
[ "passage: TAGS\n#task_categories-text-classification #annotations_creators-crowdsourced #language_creators-crowdsourced #multilinguality-monolingual #size_categories-10K<n<100K #source_datasets-crowdsourced #language-Portuguese #license-cc-by-4.0 #hate-speech-detection #arxiv-2010.04543 #arxiv-2103.14972 #arxiv-2312.17704 #region-us \n#### Table 1 - TuPyE composition\n\n\nLabel: Leite et al., Count: 21,000, Source: Twitter\nLabel: TuPy, Count: 10,000, Source: Twitter\nLabel: Vargas et al., Count: 7,000, Source: Instagram\nLabel: Fortuna et al., Count: 5,668, Source: Twitter\n\n\nTable 2 provides a detailed breakdown of the dataset, delineating the volume of data based on the occurrence of aggressive speech and the manifestation of hate speech within the documents#### Table 2 - Count of non-aggressive and aggressive documents\n\n\n\nTable 3 provides a detailed analysis of the dataset, delineating the data volume in relation to the occurrence of distinct categories of hate speech." ]