sha (string, length 40) | text (string, length 1 to 13.4M) | id (string, length 2 to 117) | tags (list, 1 to 7.91k items) | created_at (string, length 25) | metadata (string, length 2 to 875k) | last_modified (string, length 25) | arxiv (list, 0 to 25 items) | languages (list, 0 to 7.91k items) | tags_str (string, length 17 to 159k) | text_str (string, length 1 to 447k) | text_lists (list, 0 to 352 items) | processed_texts (list, 1 to 353 items) | tokens_length (list, 1 to 353 items) | input_texts (list, 1 to 40 items)
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---
ca5a7eb97bc452ed6608881dacc6b915bc03a364 | # Dataset Card for "small_division_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_division_decimal | [
"region:us"
]
| 2023-11-17T04:47:43+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1589.3333333333333, "num_examples": 32}, {"name": "test", "num_bytes": 198.66666666666666, "num_examples": 4}], "download_size": 4415, "dataset_size": 1788.0}} | 2023-11-17T05:53:57+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_division_decimal"
More Information needed | [
"# Dataset Card for \"small_division_decimal\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_division_decimal\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_division_decimal\"\n\nMore Information needed"
]
|
812ffcd26fc6e6da16179176dcfcfe30b9c41529 | # Dataset Card for "small_subtraction_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_subtraction_decimal | [
"region:us"
]
| 2023-11-17T04:47:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2030.2222222222222, "num_examples": 40}, {"name": "test", "num_bytes": 253.77777777777777, "num_examples": 5}], "download_size": 4553, "dataset_size": 2284.0}} | 2023-11-17T05:53:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_subtraction_decimal"
More Information needed | [
"# Dataset Card for \"small_subtraction_decimal\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_subtraction_decimal\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_subtraction_decimal\"\n\nMore Information needed"
]
|
7050a55ae2c716bad8fe3deb0005fdc74f0f1aa8 | # Dataset Card for "small_addition_decimal"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jlbaker361/small_addition_decimal | [
"region:us"
]
| 2023-11-17T04:47:46+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "float64"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1827.5555555555557, "num_examples": 40}, {"name": "test", "num_bytes": 228.44444444444446, "num_examples": 5}], "download_size": 4479, "dataset_size": 2056.0}} | 2023-11-17T05:54:00+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "small_addition_decimal"
More Information needed | [
"# Dataset Card for \"small_addition_decimal\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"small_addition_decimal\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"small_addition_decimal\"\n\nMore Information needed"
]
|
b32310b23bf03aac068048f7ec067383febd7374 | This dataset comprises roleplay chat conversations scraped from several Discord RP fandom servers. The conversations have been split by day, on the assumption that most long-form roleplays are started/continued and completed within a single day.
The original dataset consisted of ~90k samples. Light filtering stripped that down to ~18k samples. Stricter filtering stripped it down to ~8k samples. The strictest filtering stripped it down to ~2k samples.
Some effort was made to remove OOC (out-of-character) chatter, links, and other miscellaneous fluff, but more work still needs to be done. This isn't a "completed" dataset so much as a test to see if the data gathered is conducive to training LLMs for roleplay purposes. If determined to be useful, I will continue to scrape more data.
In here are several files:
* `discord_rp_with_token_counts.json` - The original dataset in all its unprocessed glory. ~90k items. Total Average Token Length for all items: ~164.
* `125_tokens_10_messages_discord_rp.json` (Strictest) - Original dataset filtered for an average token length of 125 and a minimum conversation length of 10 messages. Mostly unprocessed.
* `80_tokens_6_messages_discord_rp.json` (Stricter) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages. Mostly unprocessed. The latter contains the former, so use one or the other, but not both.
* `80_tokens_3_messages_discord_rp.json` (Light) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 3 messages. Mostly unprocessed. The latter contains the former, so use one or the other, but not both.
* `opencai_rp.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed. Contains descriptions of characters, summary, scene, and genre tags provided by `gpt-3.5-turbo-16k`.
* `opencai_rp_metharme.json` - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed, filtered to 1229 samples, and converted to metharme format.
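A minimal loading sketch (assuming each file is a JSON array of records with the `timestamp` and `conversations` fields described under "Explanation of Properties" below):
```python
import json

# Sketch: load the strictest filtered file and print the first conversation.
with open("125_tokens_10_messages_discord_rp.json", encoding="utf-8") as f:
    records = json.load(f)  # assumed: a list of conversation records

first = records[0]
print(first["timestamp"])
for turn in first["conversations"]:
    speaker = "bot" if turn["is_bot"] else "human"
    print(f"{turn['author']} ({speaker}): {turn['message'][:80]}")
```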
Explanation of Properties:
* `timestamp` - Date of the interaction in YYYY-MM-DD format
* `conversations`: The conversation between the users in the chat. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing three key-value pairs: `message`, referring to the utterance itself, `author` referring to their Discord username, and `is_bot`, which designates whether the message was sent by a human or a bot. `is_bot` was determined by checking if author still had a discriminator and, therefore, isn't 100% accurate. | Norquinal/OpenCAI | [
"size_categories:10K<n<100K",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-11-17T04:54:18+00:00 | {"language": ["en"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"]} | 2024-02-16T14:20:36+00:00 | []
| [
"en"
]
| TAGS
#size_categories-10K<n<100K #language-English #license-cc-by-nc-4.0 #region-us
| This dataset comprises roleplay chat conversations scraped from several Discord RP fandom servers. The conversations have been split by day, on the assumption that most long-form roleplays are started/continued and completed within a single day.
The original dataset consisted of ~90k samples. Light filtering stripped that down to ~18k samples. Stricter filtering stripped it down to ~8k samples. The strictest filtering stripped it down to ~2k samples.
Some effort was made to remove OOC (out-of-character) chatter, links, and other miscellaneous fluff, but more work still needs to be done. This isn't a "completed" dataset so much as a test to see if the data gathered is conducive to training LLMs for roleplay purposes. If determined to be useful, I will continue to scrape more data.
In here are several files:
* 'discord_rp_with_token_counts.json' - The original dataset in all its unprocessed glory. ~90k items. Total Average Token Length for all items: ~164.
* '125_tokens_10_messages_discord_rp.json' (Strictest) - Original dataset filtered for an average token length of 125 and a minimum conversation length of 10 messages. Mostly unprocessed.
* '80_tokens_6_messages_discord_rp.json' (Stricter) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 6 messages. Mostly unprocessed. The latter contains the former, so use one or the other, but not both.
* '80_tokens_3_messages_discord_rp.json' (Light) - Original dataset filtered for an average token length of 80 tokens and a minimum conversation length of 3 messages. Mostly unprocessed. The latter contains the former, so use one or the other, but not both.
* 'opencai_rp.json' - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed. Contains descriptions of characters, summary, scene, and genre tags provided by 'gpt-3.5-turbo-16k'.
* 'opencai_rp_metharme.json' - Original dataset filtered for an average token length of 125 tokens and a minimum conversation length of 10 messages, then processed, filtered to 1229 samples, and converted to metharme format.
Explanation of Properties:
* 'timestamp' - Date of the interaction in YYYY-MM-DD format
* 'conversations': The conversation between the users in the chat. This is represented as a list of dictionaries, each dictionary representing a single utterance and containing three key-value pairs: 'message', referring to the utterance itself, 'author' referring to their Discord username, and 'is_bot', which designates whether the message was sent by a human or a bot. 'is_bot' was determined by checking if author still had a discriminator and, therefore, isn't 100% accurate. | []
| [
"TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-nc-4.0 #region-us \n"
]
| [
33
]
| [
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-cc-by-nc-4.0 #region-us \n"
]
|
5572622970c659e1b05312775e329727b1ca7f34 | # Dataset Card for "nuclear_patents"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arcee-ai/nuclear_patents | [
"region:us"
]
| 2023-11-17T05:04:52+00:00 | {"dataset_info": {"features": [{"name": "patent_number", "dtype": "string"}, {"name": "section", "dtype": "string"}, {"name": "raw_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 350035355.37046283, "num_examples": 33523}, {"name": "test", "num_bytes": 38895137.62953716, "num_examples": 3725}], "download_size": 151011439, "dataset_size": 388930493.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-19T03:40:35+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "nuclear_patents"
More Information needed | [
"# Dataset Card for \"nuclear_patents\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"nuclear_patents\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"nuclear_patents\"\n\nMore Information needed"
]
|
7393d620487b9d53fcc7847a031b51b80fa2a47d | # Dataset Card for "miniimagenet_caption"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cherry0324/miniimagenet_caption | [
"region:us"
]
| 2023-11-17T05:27:24+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2506731980.0, "num_examples": 50000}], "download_size": 1288083278, "dataset_size": 2506731980.0}} | 2023-11-17T15:51:13+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "miniimagenet_caption"
More Information needed | [
"# Dataset Card for \"miniimagenet_caption\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"miniimagenet_caption\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"miniimagenet_caption\"\n\nMore Information needed"
]
|
3c07df5911d4cf080011c83c2ffd42546a7aeaa9 | # Kesalahan Tatabahasa Choice
Notebook at https://github.com/mesolitica/malaysian-dataset/tree/master/tatabahasa/qa-choice | mesolitica/kesalahan-tatabahasa-choice | [
"language:ms",
"license:mit",
"region:us"
]
| 2023-11-17T05:29:44+00:00 | {"language": ["ms"], "license": "mit"} | 2024-01-21T11:45:18+00:00 | []
| [
"ms"
]
| TAGS
#language-Malay (macrolanguage) #license-mit #region-us
| # Kesalahan Tatabahasa Choice
Notebook at URL | [
"# Kesalahan Tatabahasa Choice\n\nNotebook at URL"
]
| [
"TAGS\n#language-Malay (macrolanguage) #license-mit #region-us \n",
"# Kesalahan Tatabahasa Choice\n\nNotebook at URL"
]
| [
21,
10
]
| [
"passage: TAGS\n#language-Malay (macrolanguage) #license-mit #region-us \n# Kesalahan Tatabahasa Choice\n\nNotebook at URL"
]
|
78d854c34004424c4d6b38545593026bb3e3846f | # CodeFuseEval
## Dataset Description
[CodeFuseEval](https://github.com/codefuse-ai/codefuse-evaluation) is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.
## Languages
The dataset contains coding problems in 4 programming tasks: CodeCompletion, NL2Code, CodeTranslation and CodeDataScience.
## Dataset Structure
To load the dataset, you need to specify a subset name from the following list of subdataset names
```['humaneval_python', 'humaneval_python_cn', 'humaneval_js', 'humaneval_java', 'humaneval_go', 'humaneval_rust', 'humaneval_cpp', 'mbpp', 'codeTrans_python_to_java', 'codeTrans_python_to_cpp', 'codeTrans_cpp_to_java', 'codeTrans_cpp_to_python', 'codeTrans_java_to_python', 'codeTrans_java_to_cpp', 'codeCompletion_matplotlib', 'codeCompletion_numpy', 'codeCompletion_pandas', 'codeCompletion_pytorch', 'codeCompletion_scipy', 'codeCompletion_sklearn', 'codeCompletion_tensorflow', 'codeInsertion_matplotlib', 'codeInsertion_numpy', 'codeInsertion_pandas', 'codeInsertion_pytorch', 'codeInsertion_scipy', 'codeInsertion_sklearn', 'codeInsertion_tensorflow']```.
By default `humaneval_python` is loaded.
```python
from datasets import load_dataset
load_dataset("codefuse-ai/CodeFuseEval", "humaneval_python")
```
## Data Fields
Different subdatasets have different fields. You can check the fields of each subdataset by calling `dataset["test"].features`. For example, for `humaneval_python`:
* ``task_id``: indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ``prompt``: the function declaration and docstring, used for code generation.
* ``declaration``: only the function declaration, used for code translation.
* ``canonical_solution``: human-crafted example solutions.
* ``test``: hidden test samples, used for evaluation.
* ``example_test``: public test samples (appeared in prompt), used for evaluation.
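As a quick sketch building on the loading snippet above (the split name and field names follow the sections above):
```python
from datasets import load_dataset

ds = load_dataset("codefuse-ai/CodeFuseEval", "humaneval_python")
print(ds["test"].features)      # inspect the available fields

sample = ds["test"][0]
print(sample["task_id"])        # target language and problem ID
print(sample["prompt"][:200])   # function declaration + docstring used for generation
```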
## Data Splits
Each subset has one split: test.
## Citation Information
Refer to https://github.com/codefuse-ai/codefuse-evaluation. | codefuse-ai/CodeFuseEval | [
"region:us"
]
| 2023-11-17T06:24:55+00:00 | {} | 2023-11-20T02:51:20+00:00 | []
| []
| TAGS
#region-us
| # CodeFuseEval
## Dataset Description
CodeFuseEval is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.
## Languages
The dataset contains coding problems in 4 programming tasks: CodeCompletion, NL2Code, CodeTranslation and CodeDataScience.
## Dataset Structure
To load the dataset you need to specify a subset name among the subdataset names
.
By default 'humaneval_python' is loaded.
## Data Fields
Different subdatasets have different fields. You can check the fields of each subdataset by calling 'dataset["test"].features'. For example, for 'humaneval_python':
* ''task_id'': indicates the target language and ID of the problem. Language is one of ["Python", "Java", "JavaScript", "CPP", "Go"].
* ''prompt'': the function declaration and docstring, used for code generation.
* ''declaration'': only the function declaration, used for code translation.
* ''canonical_solution'': human-crafted example solutions.
* ''test'': hidden test samples, used for evaluation.
* ''example_test'': public test samples (appeared in prompt), used for evaluation.
## Data Splits
Each subset has one split: test.
Refer to URL | [
"# CodeFuseEval",
"## Dataset Description\nCodeFuseEval is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.",
"## Languages\n\nThe dataset contains coding problems in 4 programming tasks: CodeCompletion, NL2Code, CodeTranslation and CodeDataScience.",
"## Dataset Structure\nTo load the dataset you need to specify a subset name among the subdataset names \n. \nBy default 'humaneval_python' is loaded.",
"## Data Fields\nDifferent subdataset has different fields. You can check the fields of each subdataset by calling 'dataset[\"test\"].features'. For example, for 'humaneval_python':\n* ''task_id'': indicates the target language and ID of the problem. Language is one of [\"Python\", \"Java\", \"JavaScript\", \"CPP\", \"Go\"].\n* ''prompt'': the function declaration and docstring, used for code generation.\n* ''declaration'': only the function declaration, used for code translation. \n* ''canonical_solution'': human-crafted example solutions.\n* ''test'': hidden test samples, used for evaluation.\n* ''example_test'': public test samples (appeared in prompt), used for evaluation.",
"## Data Splits\n\nEach subset has one split: test.\n\n\n\nRefer to URL"
]
| [
"TAGS\n#region-us \n",
"# CodeFuseEval",
"## Dataset Description\nCodeFuseEval is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.",
"## Languages\n\nThe dataset contains coding problems in 4 programming tasks: CodeCompletion, NL2Code, CodeTranslation and CodeDataScience.",
"## Dataset Structure\nTo load the dataset you need to specify a subset name among the subdataset names \n. \nBy default 'humaneval_python' is loaded.",
"## Data Fields\nDifferent subdataset has different fields. You can check the fields of each subdataset by calling 'dataset[\"test\"].features'. For example, for 'humaneval_python':\n* ''task_id'': indicates the target language and ID of the problem. Language is one of [\"Python\", \"Java\", \"JavaScript\", \"CPP\", \"Go\"].\n* ''prompt'': the function declaration and docstring, used for code generation.\n* ''declaration'': only the function declaration, used for code translation. \n* ''canonical_solution'': human-crafted example solutions.\n* ''test'': hidden test samples, used for evaluation.\n* ''example_test'': public test samples (appeared in prompt), used for evaluation.",
"## Data Splits\n\nEach subset has one split: test.\n\n\n\nRefer to URL"
]
| [
6,
6,
77,
36,
42,
189,
16
]
| [
"passage: TAGS\n#region-us \n# CodeFuseEval## Dataset Description\nCodeFuseEval is a benchmark for evaluating the multilingual ability of code generative models. It consists of 820 high-quality human-crafted data samples (each with test cases) in Python, C++, Java, JavaScript, and Go, and can be used for various tasks, such as code generation and translation.## Languages\n\nThe dataset contains coding problems in 4 programming tasks: CodeCompletion, NL2Code, CodeTranslation and CodeDataScience.## Dataset Structure\nTo load the dataset you need to specify a subset name among the subdataset names \n. \nBy default 'humaneval_python' is loaded.## Data Fields\nDifferent subdataset has different fields. You can check the fields of each subdataset by calling 'dataset[\"test\"].features'. For example, for 'humaneval_python':\n* ''task_id'': indicates the target language and ID of the problem. Language is one of [\"Python\", \"Java\", \"JavaScript\", \"CPP\", \"Go\"].\n* ''prompt'': the function declaration and docstring, used for code generation.\n* ''declaration'': only the function declaration, used for code translation. \n* ''canonical_solution'': human-crafted example solutions.\n* ''test'': hidden test samples, used for evaluation.\n* ''example_test'': public test samples (appeared in prompt), used for evaluation.## Data Splits\n\nEach subset has one split: test.\n\n\n\nRefer to URL"
]
|
b5bec0d01ddc1cbd8b6645d58779096266d86277 | # Dataset Card for "data_part01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_part01 | [
"region:us"
]
| 2023-11-17T06:42:57+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5569879057, "num_examples": 145029701}], "download_size": 2986735798, "dataset_size": 5569879057}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T06:50:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_part01"
More Information needed | [
"# Dataset Card for \"data_part01\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_part01\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_part01\"\n\nMore Information needed"
]
|
95295e65dac8aeb42c9b6abe14123ffd65808e6c |
KAgentBench is the benchmark proposed in KwaiAgents ([Github](https://github.com/KwaiKEG/KwaiAgents)), which is a series of Agent-related works open-sourced by the [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). It contains over 3,000 human-edited, automated evaluation examples for testing Agent capabilities, with evaluation dimensions including planning, tool-use, reflection, concluding, and profiling.
## Overall statistics of KAgentBench
---
| type| #Queries | #Inst | Avg. #Ground | Avg. #Tools | Avg. #Turns | Avg. #Tasks | Avg. Len-Know | Metric |
| :-------: | :-------:| :-------: | :-------: | :-------: | :-------: | :-------: | :-------: | :-------: |
| Planning & Tool-use | 320 | 1,317 | 4.12 | 8.68 | 1.51 | 2.21 | 245.31 | ROUGE-L, EM |
| Reflection | 68 | 272 | 4 | 12 | 1 | 3.97 | 1369.04 | ROUGE-L, EM |
| Concluding | 245 | 1,225 | 5 | - | 1.52 | 2.14 | 923.96 | ROUGE-L |
| Profile | 433 | 433 | 5 | - | 1.99 | - | - | ROUGE-L |
## Experimental results of different LLMs on KAgentBench
---
The specific performance of different models on benchmarks can be seen in more detail in our [paper](https://arxiv.org/abs/2312.04889).
| | Scale | Planning | Tool-use | Reflection | Concluding | Profile | Overall Score |
|----------------|-------|----------|----------|------------|------------|---------|---------------|
| GPT-3.5-turbo | - | 18.55 | 15.89 | 5.32 | 37.26 | 35.42 | 21.72 |
| Llama2 | 13B | 0.15 | 0.23 | 0.08 | 16.60 | 17.73 | 5.22 |
| ChatGLM3 | 6B | 7.87 | 6.82 | 4.49 | 30.01 | 30.14 | 13.82 |
| Qwen | 7B | 13.34 | 10.87 | 4.73 | 36.24 | 34.99 | 18.36 |
| Baichuan2 | 13B | 6.70 | 10.11 | 4.25 | 24.97 | 19.08 | 12.54 |
| ToolLlama | 7B | 0.20 | 3.44 | 0.54 | 15.62 | 10.66 | 5.50 |
| AgentLM | 13B | 0.17 | 0.09 | 0.05 | 16.30 | 15.22 | 4.86 |
| Qwen-MAT | 7B | 31.64 | 28.26 | 29.50 | 44.85 | 44.78 | 34.20 |
| Baichuan2-MAT | 13B | 37.27 | 34.82 | 32.06 | 48.01 | 41.83 | 38.49 |
## JSON Format
---
Each data point is
a dict with the following keys:
- `id`: a unique id for this data point. This is useful for evaluation.
- `query`: a string.
- `type`: a string, the type of this data (plantooluse, reflextion, conclusion, profile).
- `golden_result_list`: a list. The reference response.
- `funcs`: a list of functions that may be used in the current query
- `prompt_input`: a dict, input composed of different prompt templates
- `memory`: a string
- `memory_type`: a string, types of memory: task, knowledge, conversation
- `memory_last_task`: a list, in the case where memory is task, the last task information in the previous round
The overall data format is as follows
```json
{
"id": "",
"query": "",
"type": "",
"golden_result_list": [],
"funcs": [],
"prompt_input": {},
"memory": "",
"memory_type": "",
"memory_last_task": {}
}
```
## How to download benchmark
---
You can download the benchmark evaluation set from [kwaikeg/KAgentBench](https://huggingface.co/datasets/kwaikeg/KAgentBench/tree/main), or from [KwaiAgents](https://github.com/KwaiKEG/KwaiAgents).
The filename of the evaluation set is 'benchmark_eval.jsonl'. Download the file to your local system.
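A minimal sketch for peeking at the downloaded file (assuming it is standard JSON Lines with the fields listed under "JSON Format" above):
```python
import json

# Sketch: iterate over the benchmark file and print a short summary per item.
with open("benchmark_eval.jsonl", encoding="utf-8") as f:
    for line in f:
        item = json.loads(line)
        print(item["id"], item["type"], item["query"][:60])
```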
## Environment Setup
---
Please make sure you have set up the environment and installed the required packages. Once these requirements are met, install the dependent libraries.
```bash
pip install -r requirements.txt
```
## Benchmark Inference
To run benchmark evaluations with different models, the inference script needs to load the model and run prediction in the way that model expects; different models may vary in how they are initialized and loaded. We provide inference scripts for both the Qianwen (Qwen) and Baichuan models, and inference for other models can be adapted from the scripts we provide.
Take the provided open-source model kagentlms_qwen_7b_mat as an example (the kagentlms_qwen_7b_mat model has been uploaded to [huggingface](https://huggingface.co/kwaikeg/kagentlms_qwen_7b_mat)).
Run the script to perform benchmark predictions. The argument is the file where the prediction results will be written.
```bash
python infer_qwen.py qwen_benchmark_res.jsonl
```
Similarly, to perform evaluations using the 'kagentlms_baichuan2_13b_mat' model, execute the following command.
```bash
python infer_baichuan.py baichuan_benchmark_res.jsonl
```
The prediction results will be stored in the 'model_predict' field, which is a dictionary (dict) structure. The key is the type of prompt, and the corresponding value is the output of the model. In the 'profile', as it contains only one prompt, 'model_predict' is a string, and the corresponding value is the model output.
Below is an example display of a prediction result for 'conclusion' data.
```json
{
"query": "在过去的一年中,哪个月份的全球气温最高?",
"id": "0ae75cf4-8897-5998-b74c-c7ae7202621c",
"type": "conclusion",
"memory_type": "task",
"memory": "...",
"funcs": [],
"memory_last_task": {},
"golden_result_list": [
{
"golden_result":"2022年中,6月是全球最高气温。"
},
{
"golden_result":"2022年中,全球最高气温是6月。"
},
{
"golden_result":"2022年中,全球最高气温是6月。"
},
{
"golden_result":"2022年中,6月是全球最高气温。。"
},
{
"golden_result":"2022年中,全球最高气温是6月。"
}
],
"prompt_input": {...},
"model_predict": {
"ReACT": "2022年7月的全球气温最高,达到了17.23℃。",
"AutoGPT": "在过去的一年中,全球气温最高的月份是2023年4月,全球表面平均气温达到了17.23°C,创下了历史新高。这一数据来自于美国国家海洋和大气管理局与美国缅因大学汇编的初步数据。",
"ModelScope": "在过去一年中,全球气温最高的月份是2023年7月。根据美国国家航空航天局(NASA)和美国国家海洋和大气管理局(NOAA)的最新研究报告,2022年是史上排名第6的最热年份,尽管拉尼娜现象改变了天气模式,抑制了全球气温。然而,2023年7月的全球平均气温达到了17.23℃,超过了7月4日和5日创下的17.18℃的联合纪录,而后者之前曾打破了7月3日创下的17.01℃的纪录。",
"ToolLlama": "Thought: I have found the information about the warmest month in 2022 in the memory. I will use this information to answer the question and call the Finish function to provide the final answer.\nAction: Finish\nAction Input: {\"return_type\": \"give_answer\", \"final_answer\": \"在2022年,全球最暖的月份是5月,比20世纪平均值高出1.39°F(0.77°C)。\"}",
"Kuaiagent": "在过去的一年中,全球气温最高的月份是2023年7月。根据美国国家航空航天局(NASA)和美国国家海洋和大气管理局(NOAA)的最新研究报告,2023年7月的全球平均气温达到了17.23℃,超过了19世纪50年代有仪器记录以来的任何一个月。"
}
}
```
'...' indicates that the content is too extensive. For the sake of convenient display format, the content has been abbreviated.
## Evaluation
Taking the inference results of the 'kagentlms_qwen_7b_mat' model as an example, calculate the metrics based on the benchmark inference results obtained in the inference step above. Execute the following command to get the overall score.
The first parameter is the benchmark evaluation set, which contains reference responses manually annotated, and the second parameter is the prediction results of the model being evaluated.
```bash
python benchmark_eval.py ./benchmark_eval.jsonl ./qwen_benchmark_res.jsonl
```
The final model score is as follows:
```bash
plan : 31.64, tooluse : 28.26, reflextion : 29.50, conclusion : 44.85, profile : 44.78, overall : 34.20
``` | kwaikeg/KAgentBench | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2312.04889",
"region:us"
]
| 2023-11-17T06:47:21+00:00 | {"language": ["zh", "en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"]} | 2024-01-04T12:18:49+00:00 | [
"2312.04889"
]
| [
"zh",
"en"
]
| TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.04889 #region-us
| KAgentBench is the benchmark proposed in KwaiAgents (Github), which is a series of Agent-related works open-sourced by the KwaiKEG from Kuaishou Technology. It contains over 3,000 human-edited, automated evaluation examples for testing Agent capabilities, with evaluation dimensions including planning, tool-use, reflection, concluding, and profiling.
Overall statistics of KAgentBench
---------------------------------
---
Experimental results of different LLMs on KAgentBench
-----------------------------------------------------
---
The specific performance of different models on benchmarks can be seen in more detail in our paper.
JSON Format
-----------
---
Each data point is
a dict with the following keys:
* 'id': a unique id for this data point. This is useful for evaluation.
* 'query': a string.
* 'type': a string, the type of this data (plantooluse, reflextion, conclusion, profile).
* 'golden\_result\_list': a list. The reference response.
* 'funcs': a list of functions that may be used in the current query
* 'prompt\_input': a dict, input composed of different prompt templates
* 'memory': a string
* 'memory\_type': a string, types of memory: task, knowledge, conversation
* 'memory\_last\_task': a list, in the case where memory is task, the last task information in the previous round
The overall data format is as follows
How to download benchmark
-------------------------
---
You can download the benchmark evaluation set through kwaikeg/KAgentBench, or you can also download the benchmark evaluation set on KwaiAgents.
The filename of the evaluation set is 'benchmark\_eval.jsonl'. Download the file to your local system.
Environment Setup
-----------------
---
Please make sure you have set up the environment and installed the required packages. Once these requirements are met, install the dependent libraries.
Benchmark Inference
-------------------
To run benchmark evaluations using different models, it is necessary to appropriately load and predict according to the model in the inference script. Different models may have variations in their initialization and loading methods. We have provided inference scripts for both the Qianwen and Baichuan models. Inference for other models can be adapted based on the scripts we have provided.
Taking the provided open-source model kagentlms\_qwen\_7b\_mat as an example.(The kagentlms\_qwen\_7b\_mat model has been uploaded to huggingface)
Run the script to perform benchmark predictions. The parameter is the prediction results file.
Similarly, to perform evaluations using the 'kagentlms\_baichuan2\_13b\_mat' model, execute the following command.
The prediction results will be stored in the 'model\_predict' field, which is a dictionary (dict) structure. The key is the type of prompt, and the corresponding value is the output of the model. In the 'profile', as it contains only one prompt, 'model\_predict' is a string, and the corresponding value is the model output.
Below is an example display of a prediction result for 'conclusion' data.
'...' indicates that the content is too extensive. For the sake of convenient display format, the content has been abbreviated.
Evaluation
----------
Taking the inference results of the 'kagentlms\_qwen\_7b\_mat' model as an example, calculate the metrics based on the benchmark inference results obtained in the inference step above. Execute the following command to get the overall score.
The first parameter is the benchmark evaluation set, which contains reference responses manually annotated, and the second parameter is the prediction results of the model being evaluated.
The final model score is as follows:
| []
| [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.04889 #region-us \n"
]
| [
60
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.04889 #region-us \n"
]
|
352f228b9783c9e956c89f02379d2aa2762fb54f | # Dataset Card for "data_part02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_part02 | [
"region:us"
]
| 2023-11-17T06:52:06+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5777591178, "num_examples": 138701675}], "download_size": 3034948930, "dataset_size": 5777591178}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T06:58:52+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_part02"
More Information needed | [
"# Dataset Card for \"data_part02\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_part02\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_part02\"\n\nMore Information needed"
]
|
f1ad3a389292d99eb14e21c77f36fb9a987dbf94 | # Dataset Card for "no_robots_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Zangs3011/no_robots_llama2 | [
"region:us"
]
| 2023-11-17T06:57:11+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_id", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "category", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 29092450, "num_examples": 9500}, {"name": "test", "num_bytes": 1560738, "num_examples": 500}], "download_size": 18917122, "dataset_size": 30653188}} | 2023-11-17T06:59:16+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "no_robots_llama2"
More Information needed | [
"# Dataset Card for \"no_robots_llama2\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"no_robots_llama2\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"no_robots_llama2\"\n\nMore Information needed"
]
|
2274e439c5a799ee55459a7d1199a4a307357e11 | # Dataset Card for "data_part03"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_part03 | [
"region:us"
]
| 2023-11-17T07:00:09+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4911356191, "num_examples": 109915148}], "download_size": 2471976257, "dataset_size": 4911356191}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T07:05:39+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_part03"
More Information needed | [
"# Dataset Card for \"data_part03\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_part03\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_part03\"\n\nMore Information needed"
]
|
3e98cbc41bb5154712c5d0e03c887571f80beed3 | # Dataset Card for "data_part04"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_part04 | [
"region:us"
]
| 2023-11-17T07:06:57+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4857438660, "num_examples": 103117853}], "download_size": 2376530922, "dataset_size": 4857438660}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T07:12:07+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_part04"
More Information needed | [
"# Dataset Card for \"data_part04\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_part04\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_part04\"\n\nMore Information needed"
]
|
3021c3998027b305cffd687016685b5985be349e | # Dataset Card for "data_part05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_part05 | [
"region:us"
]
| 2023-11-17T07:13:14+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3407302359, "num_examples": 71056934}], "download_size": 1776250538, "dataset_size": 3407302359}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T07:17:25+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_part05"
More Information needed | [
"# Dataset Card for \"data_part05\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_part05\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_part05\"\n\nMore Information needed"
]
|
e579c05d8c5aa25fcc871d4ab6019b179c4ee13b |
KAgentInstruct is the instruction-tuning dataset proposed in KwaiAgents ([Github](https://github.com/KwaiKEG/KwaiAgents)), which is a series of Agent-related works open-sourced by the [KwaiKEG](https://github.com/KwaiKEG) from [Kuaishou Technology](https://www.kuaishou.com/en). It contains over 200k agent-related instruction fine-tuning examples (partially human-edited). Note that the dataset does not contain the General-type data mentioned in the [paper](https://arxiv.org/pdf/2312.04889.pdf).
## Overall statistics of KAgentInstruct
We incorporate open-source templates ([ReACT](https://github.com/ysymyth/ReAct), [AutoGPT](https://github.com/Significant-Gravitas/AutoGPT), [ToolLLaMA](https://github.com/OpenBMB/ToolBench), [ModelScope](https://github.com/modelscope/modelscope-agent)), the KAgentSys template, and our Meta-Agent generated templates alongside the sampled queries into the experimental agent loop. This resulted in a collection of prompt-response pairs, comprising 224,137 instances, 120,917 queries and 18,005 templates, summarized in the table below.
| #Instances | #Queries | #Templates | Avg. #Steps |
|:---------:|:--------:|:----------:|:-----------:|
| 224,137 | 120,917 | 18,005 | 1.85 |
---
## Data Format
Each entry in the dataset is a dictionary with the following structure:
- `id`: A unique identifier for the entry.
- `query`: The query string.
- `source`: The origin of the data, which is one of the following: 'kwai-agent', 'meta-agent', 'autogpt', 'modelscope', 'react', 'toolllama', 'profile'.
- `functions`: A list of strings, where each string is a JSON object in string form that can be parsed into a dictionary, unless the source is 'meta-agent'.
- `function_names`: A list of function names as strings, corresponding to the functions in the `functions` list.
- `llm_prompt_response`: A list of dicts, each containing:
- `instruction`: Instruction text string.
- `input`: Input text string.
- `output`: Output text string.
- `llm_name`: The name of the LLM used, either 'gpt4' or 'gpt3.5'.
- `human_edited`: A Boolean value indicating whether the response was edited by a human.
- `extra_infos`: A dictionary containing additional useful information.
This format is designed for clarity and streamlined access to data points within the dataset.
The overall data format is as follows,
```json
{
"id": "",
"query": "",
"source": "",
"functions": [],
"function_names": [],
"llm_prompt_response": [
{
'instruction': "",
'input': "",
'output': "",
'llm_name': "",
'human_edited': bool
},
...
],
"extra_infos": {}
}
```
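Since each element of `functions` is a JSON object serialized as a string, it can be parsed back into a dictionary; a minimal sketch (the record below is hypothetical, for illustration only):
```python
import json

# Hypothetical record following the format above (values are made up).
entry = {
    "functions": ['{"name": "web_search", "parameters": {"query": "string"}}'],
    "function_names": ["web_search"],
}

# Parse the stringified function schemas into dictionaries.
parsed_funcs = [json.loads(f) for f in entry["functions"]]
print(parsed_funcs[0]["name"])  # -> web_search
```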
---
## How to download KAgentInstruct
You can download KAgentInstruct through [kwaikeg/KAgentInstruct](https://huggingface.co/datasets/kwaikeg/KAgentInstruct/tree/main)
---
## Citation
```
@article{pan2023kwaiagents,
author = {Haojie Pan and
Zepeng Zhai and
Hao Yuan and
Yaojia Lv and
Ruiji Fu and
Ming Liu and
Zhongyuan Wang and
Bing Qin
},
title = {KwaiAgents: Generalized Information-seeking Agent System with Large Language Models},
journal = {CoRR},
volume = {abs/2312.04889},
year = {2023}
}
```
| kwaikeg/KAgentInstruct | [
"task_categories:text-generation",
"size_categories:100K<n<200K",
"language:zh",
"language:en",
"license:cc-by-nc-sa-4.0",
"arxiv:2312.04889",
"region:us"
]
| 2023-11-17T07:35:20+00:00 | {"language": ["zh", "en"], "license": "cc-by-nc-sa-4.0", "size_categories": ["100K<n<200K"], "task_categories": ["text-generation"]} | 2024-01-24T09:17:07+00:00 | [
"2312.04889"
]
| [
"zh",
"en"
]
| TAGS
#task_categories-text-generation #size_categories-100K<n<200K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.04889 #region-us
| KAgentInstruct is the instruction-tuning dataset proposed in KwaiAgents (Github), which is a series of Agent-related works open-sourced by the KwaiKEG from Kuaishou Technology. It contains over 200k agent-related instruction fine-tuning examples (partially human-edited). Note that the dataset does not contain the General-type data mentioned in the paper.
Overall statistics of KAgentInstruct
------------------------------------
We incorporate open-source templates (ReACT, AutoGPT, ToolLLaMA, ModelScope), the KAgentSys template, and our Meta-Agent generated templates alongside the sampled queries into the experimental agent loop. This resulted in a collection of prompt-response pairs, comprising 224,137 instances, 120,917 queries and 18,005 templates, summarized in the table below.
---
Data Format
-----------
Each entry in the dataset is a dictionary with the following structure:
* 'id': A unique identifier for the entry.
* 'query': The query string.
* 'source': The origin of the data, which is one of the following: 'kwai-agent', 'meta-agent', 'autogpt', 'modelscope', 'react', 'toolllama', 'profile'.
* 'functions': A list of strings, where each string is a JSON object in string form that can be parsed into a dictionary, unless the source is 'meta-agent'.
* 'function\_names': A list of function names as strings, corresponding to the functions in the 'functions' list.
* 'llm\_prompt\_response': A list of dicts, each containing:
+ 'instruction': Instruction text string.
+ 'input': Input text string.
+ 'output': Output text string.
+ 'llm\_name': The name of the LLM used, either 'gpt4' or 'gpt3.5'.
+ 'human\_edited': A Boolean value indicating whether the response was edited by a human.
* 'extra\_infos': A dictionary containing additional useful information.
This format is designed for clarity and streamlined access to data points within the dataset.
The overall data format is as follows,
---
How to download KAgentInstruct
------------------------------
You can download KAgentInstruct through kwaikeg/KAgentInstruct
---
| []
| [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<200K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.04889 #region-us \n"
]
| [
60
]
| [
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<200K #language-Chinese #language-English #license-cc-by-nc-sa-4.0 #arxiv-2312.04889 #region-us \n"
]
|
3d29f938980af93b5d488bab9aabbd8f9ed4815d | # Dataset Card for "dataset-creator-reddit-bestofredditorupdates"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
--- Generated Part of README Below ---
## Dataset Overview
The goal is to have an open dataset of [r/bestofredditorupdates](https://www.reddit.com/r/bestofredditorupdates/) submissions. I'm leveraging PRAW and the Reddit API to get downloads.
There is a limit of 1000 in an API call and limited search functionality, so this is run daily to get new submissions.
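For reference, a minimal PRAW sketch of the kind of pull described above (credentials are placeholders, not the configuration actually used by this Space):
```python
import praw

# Sketch: fetch the newest r/bestofredditorupdates submissions (capped at ~1000 per API call).
reddit = praw.Reddit(
    client_id="YOUR_CLIENT_ID",          # placeholder
    client_secret="YOUR_CLIENT_SECRET",  # placeholder
    user_agent="dataset-creator",
)
for submission in reddit.subreddit("bestofredditorupdates").new(limit=1000):
    print(submission.id, submission.title, submission.score)
```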
## Creation Details
This dataset was created by [derek-thomas/dataset-creator-reddit-bestofredditorupdates](https://huggingface.co/spaces/derek-thomas/dataset-creator-reddit-bestofredditorupdates)
## Update Frequency
The dataset is updated daily with the most recent update being `2024-02-17 05:00:00 UTC+0000` where we added **5 new rows**.
## Licensing
[Reddit Licensing terms](https://www.redditinc.com/policies/data-api-terms) as accessed on October 25:
[License information]
## Opt-out
To opt-out of this dataset please make a request in the community tab
| derek-thomas/dataset-creator-reddit-bestofredditorupdates | [
"region:us"
]
| 2023-11-17T07:44:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "content", "dtype": "string"}, {"name": "score", "dtype": "int64"}, {"name": "date_utc", "dtype": "timestamp[ns]"}, {"name": "title", "dtype": "string"}, {"name": "flair", "dtype": "string"}, {"name": "poster", "dtype": "string"}, {"name": "permalink", "dtype": "string"}, {"name": "updated", "dtype": "bool"}, {"name": "new", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 69732395, "num_examples": 10791}], "download_size": 40649561, "dataset_size": 69732395}} | 2024-02-17T05:00:38+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "dataset-creator-reddit-bestofredditorupdates"
More Information needed
--- Generated Part of README Below ---
## Dataset Overview
The goal is to have an open dataset of r/bestofredditorupdates submissions. I'm leveraging PRAW and the Reddit API to get downloads.
There is a limit of 1000 in an API call and limited search functionality, so this is run daily to get new submissions.
## Creation Details
This dataset was created by derek-thomas/dataset-creator-reddit-bestofredditorupdates
## Update Frequency
The dataset is updated daily with the most recent update being '2024-02-17 05:00:00 UTC+0000' where we added 5 new rows.
## Licensing
Reddit Licensing terms as accessed on October 25:
[License information]
## Opt-out
To opt-out of this dataset please make a request in the community tab
| [
"# Dataset Card for \"dataset-creator-reddit-bestofredditorupdates\"\n\nMore Information needed\n\n\n--- Generated Part of README Below ---",
"## Dataset Overview\nThe goal is to have an open dataset of r/bestofredditorupdates submissions. I'm leveraging PRAW and the Reddit API to get downloads.\n\nThere is a limit of 1000 in an API call and limited search functionality, so this is run daily to get new submissions.",
"## Creation Details\nThis dataset was created by derek-thomas/dataset-creator-reddit-bestofredditorupdates",
"## Update Frequency\nThe dataset is updated daily with the most recent update being '2024-02-17 05:00:00 UTC+0000' where we added 5 new rows.",
"## Licensing \nReddit Licensing terms as accessed on October 25:\n[License information]",
"## Opt-out\nTo opt-out of this dataset please make a request in the community tab"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"dataset-creator-reddit-bestofredditorupdates\"\n\nMore Information needed\n\n\n--- Generated Part of README Below ---",
"## Dataset Overview\nThe goal is to have an open dataset of r/bestofredditorupdates submissions. I'm leveraging PRAW and the Reddit API to get downloads.\n\nThere is a limit of 1000 in an API call and limited search functionality, so this is run daily to get new submissions.",
"## Creation Details\nThis dataset was created by derek-thomas/dataset-creator-reddit-bestofredditorupdates",
"## Update Frequency\nThe dataset is updated daily with the most recent update being '2024-02-17 05:00:00 UTC+0000' where we added 5 new rows.",
"## Licensing \nReddit Licensing terms as accessed on October 25:\n[License information]",
"## Opt-out\nTo opt-out of this dataset please make a request in the community tab"
]
| [
6,
36,
70,
32,
37,
21,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"dataset-creator-reddit-bestofredditorupdates\"\n\nMore Information needed\n\n\n--- Generated Part of README Below ---## Dataset Overview\nThe goal is to have an open dataset of r/bestofredditorupdates submissions. I'm leveraging PRAW and the Reddit API to get downloads.\n\nThere is a limit of 1000 in an API call and limited search functionality, so this is run daily to get new submissions.## Creation Details\nThis dataset was created by derek-thomas/dataset-creator-reddit-bestofredditorupdates## Update Frequency\nThe dataset is updated daily with the most recent update being '2024-02-17 05:00:00 UTC+0000' where we added 5 new rows.## Licensing \nReddit Licensing terms as accessed on October 25:\n[License information]## Opt-out\nTo opt-out of this dataset please make a request in the community tab"
]
|
ea680b8ef35b6e5f62cc4162df693d0c73da23d5 | ### solkogan/SolDataset1
A dataset for training instruction-following and conversational models
### Citation
```
@MISC{solkogan/SolDataset1,
author = {Ivan Ramovich, Denis Petrov},
title = {Russian dataset for Conversational models},
url = {https://huggingface.co/datasets/solkogan/SolDataset1},
year = 2023
}
``` | solkogan/SolDataset1 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_categories:conversational",
"size_categories:100K<n<1M",
"language:ru",
"license:mit",
"region:us"
]
| 2023-11-17T08:14:24+00:00 | {"language": ["ru"], "license": "mit", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation", "text2text-generation", "conversational"]} | 2023-11-17T08:53:20+00:00 | []
| [
"ru"
]
| TAGS
#task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #size_categories-100K<n<1M #language-Russian #license-mit #region-us
| ### solkogan/SolDataset1
A dataset for training instruction-following and conversational models
| [
"### solkogan/SolDataset1\nДатасет для тренировки инструкционной и диалоговой модели"
]
| [
"TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #size_categories-100K<n<1M #language-Russian #license-mit #region-us \n",
"### solkogan/SolDataset1\nДатасет для тренировки инструкционной и диалоговой модели"
]
| [
62,
22
]
| [
"passage: TAGS\n#task_categories-text-generation #task_categories-text2text-generation #task_categories-conversational #size_categories-100K<n<1M #language-Russian #license-mit #region-us \n### solkogan/SolDataset1\nДатасет для тренировки инструкционной и диалоговой модели"
]
|
ef82719e1c463b8c716b1167b7a67eb1ee81b1b6 | # Dataset Card for "zalo_ai_math_alpaca_no_input"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MonKira/zalo_ai_math_alpaca_no_input | [
"region:us"
]
| 2023-11-17T08:18:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 721563, "num_examples": 1188}, {"name": "test", "num_bytes": 84975, "num_examples": 189}], "download_size": 220273, "dataset_size": 806538}} | 2023-11-17T08:18:19+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "zalo_ai_math_alpaca_no_input"
More Information needed | [
"# Dataset Card for \"zalo_ai_math_alpaca_no_input\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"zalo_ai_math_alpaca_no_input\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"zalo_ai_math_alpaca_no_input\"\n\nMore Information needed"
]
|
07b3f1cf7b166ad52b37de80e8b49ce9475d28d4 |
# Skin Defects Dataset
The dataset contains images of individuals with various skin conditions: **acne, skin redness, and bags under the eyes**. Each person is represented by **3 images** showcasing their specific skin issue. The dataset encompasses diverse *demographics, age, ethnicities, and genders.*
**Types of defects in the dataset**: acne, skin redness & bags under the eyes

- **Acne photos**: display different severities and types of acne such as whiteheads, blackheads, and cystic acne.
- **Skin redness photos**: display individuals with this condition, which may be caused by rosacea or eczema.
- **Bags under the eyes photos**: depicts individuals with noticeable bags under their eyes, often associated with lack of sleep, aging, or genetics.
## Full version of the dataset includes much more photos of people, leave a request on **[TrainingData](https://trainingdata.pro/data-market/skin-problems?utm_source=huggingface&utm_medium=cpc&utm_campaign=skin-defects)** to buy the dataset
The dataset is a valuable resource for researchers, developers, and organizations working in the **dermatology, cosmetics, and medical spheres** to train, evaluate, and fine-tune **AI models** for real-world applications. It can be applied in various domains like *skincare, scientific research and advertising*.
# Get the Dataset
## This is just an example of the data
Leave a request on **[https://trainingdata.pro/data-market](https://trainingdata.pro/data-market/skin-problems?utm_source=huggingface&utm_medium=cpc&utm_campaign=skin-defects) to learn about the price and buy the dataset**
# Content
The folder **files** includes:
- **3 folders** with images of people with the conditions mentioned in the name of the folder (**acne, skin redness or bags under the eyes**)
- each folder includes sub-folders with **3 images** of each person from different angles: **front, left side and right side**

### File with the extension .csv
- **id**: id of the person,
- **front**: link to access the front photo,
- **left_side**: link to access the left side's photo,
- **right_side**: link to access the right side's photo,
- **type**: type of the defect (**acne, skin redness or bags under the eyes**)
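A minimal sketch for reading this metadata file with pandas (the filename is an assumption; substitute the actual .csv name shipped alongside the **files** folder):
```python
import pandas as pd

# Sketch: load the CSV described above and count images per defect type.
df = pd.read_csv("files/skin_defects.csv")  # hypothetical filename
print(df[["id", "type"]].head())
print(df["type"].value_counts())            # acne / skin redness / bags under the eyes
```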
## **[TrainingData](https://trainingdata.pro/data-market/skin-problems?utm_source=huggingface&utm_medium=cpc&utm_campaign=skin-defects)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
*keywords: biometric dataset, face recognition database, face recognition dataset, face detection dataset, facial analysis, dermatology dataset, skin on the face, IGA scale, medical data, whiteheads, blackheads, cystic acne, rosacea, eczema disease dataset, cosmetology, multi-task learning approach, facial acne image dataset, bumps on face, facial skin lesions, skin conditions, skin images, skin characteristics, automatic facial skin defect detection system, human face images, acne marks, stains, skincare, skin problems, skin disease dataset, human images, deep learning, computer vision* | TrainingDataPro/dermatology-dataset-acne-redness-and-bags-under-the-eyes | [
"task_categories:image-classification",
"task_categories:image-to-image",
"task_categories:object-detection",
"language:en",
"license:cc-by-nc-nd-4.0",
"medical",
"biology",
"code",
"region:us"
]
| 2023-11-17T08:40:20+00:00 | {"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-classification", "image-to-image", "object-detection"], "tags": ["medical", "biology", "code"], "dataset_info": {"features": [{"name": "id", "dtype": "uint16"}, {"name": "front", "dtype": "image"}, {"name": "left_side", "dtype": "image"}, {"name": "right_side", "dtype": "image"}, {"name": "type", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 691862223.0, "num_examples": 30}], "download_size": 691900290, "dataset_size": 691862223.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T12:07:00+00:00 | []
| [
"en"
]
| TAGS
#task_categories-image-classification #task_categories-image-to-image #task_categories-object-detection #language-English #license-cc-by-nc-nd-4.0 #medical #biology #code #region-us
|
# Skin Defects Dataset
The dataset contains images of individuals with various skin conditions: acne, skin redness, and bags under the eyes. Each person is represented by 3 images showcasing their specific skin issue. The dataset encompasses diverse *demographics, age, ethnicities, and genders.*
Types of defects in the dataset: acne, skin redness & bags under the eyes

- each folder includes sub-folders with 3 images of each person from different angles: front, left side and right side

## TrainingData provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: URL
TrainingData's GitHub: URL
*keywords: biometric dataset, face recognition database, face recognition dataset, face detection dataset, facial analysis, dermatology dataset, skin on the face, IGA scale, medical data, whiteheads, blackheads, cystic acne, rosacea, eczema disease dataset, cosmetology, multi-task learning approach, facial acne image dataset, bumps on face, facial skin lesions, skin conditions, skin images, skin characteristics, automatic facial skin defect detection system, human face images, acne marks, stains, skincare, skin problems, skin disease dataset, human images, deep learning, computer vision* | [
"# Skin Defects Dataset\n\nThe dataset contains images of individuals with various skin conditions: acne, skin redness, and bags under the eyes. Each person is represented by 3 images showcasing their specific skin issue. The dataset encompasses diverse *demographics, age, ethnicities, and genders.*\n\nTypes of defects in the dataset: acne, skin redness & bags under the eyes\n\n\n- each folder includes sub-folders with 3 images of each person from different angles: front, left side and right side\n\n",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL\n\n*keywords: biometric dataset, face recognition database, face recognition dataset, face detection dataset, facial analysis, dermatology dataset, skin on the face, IGA scale, medical data, whiteheads, blackheads, cystic acne, rosacea, eczema disease dataset, cosmetology, multi-task learning approach, facial acne image dataset, bumps on face, facial skin lesions, skin conditions, skin images, skin characteristics, automatic facial skin defect detection system, human face images, acne marks, stains, skincare, skin problems, skin disease dataset, human images, deep learning, computer vision*"
]
| [
"TAGS\n#task_categories-image-classification #task_categories-image-to-image #task_categories-object-detection #language-English #license-cc-by-nc-nd-4.0 #medical #biology #code #region-us \n",
"# Skin Defects Dataset\n\nThe dataset contains images of individuals with various skin conditions: acne, skin redness, and bags under the eyes. Each person is represented by 3 images showcasing their specific skin issue. The dataset encompasses diverse *demographics, age, ethnicities, and genders.*\n\nTypes of defects in the dataset: acne, skin redness & bags under the eyes\n\n\n- each folder includes sub-folders with 3 images of each person from different angles: front, left side and right side\n\n",
"## TrainingData provides high-quality data annotation tailored to your needs\n\nMore datasets in TrainingData's Kaggle account: URL\n\nTrainingData's GitHub: URL\n\n*keywords: biometric dataset, face recognition database, face recognition dataset, face detection dataset, facial analysis, dermatology dataset, skin on the face, IGA scale, medical data, whiteheads, blackheads, cystic acne, rosacea, eczema disease dataset, cosmetology, multi-task learning approach, facial acne image dataset, bumps on face, facial skin lesions, skin conditions, skin images, skin characteristics, automatic facial skin defect detection system, human face images, acne marks, stains, skincare, skin problems, skin disease dataset, human images, deep learning, computer vision*"
]
| [
65,
188,
92,
5,
24,
69,
78,
187
]
| [
"passage: TAGS\n#task_categories-image-classification #task_categories-image-to-image #task_categories-object-detection #language-English #license-cc-by-nc-nd-4.0 #medical #biology #code #region-us \n# Skin Defects Dataset\n\nThe dataset contains images of individuals with various skin conditions: acne, skin redness, and bags under the eyes. Each person is represented by 3 images showcasing their specific skin issue. The dataset encompasses diverse *demographics, age, ethnicities, and genders.*\n\nTypes of defects in the dataset: acne, skin redness & bags under the eyes\n\n\n- each folder includes sub-folders with 3 images of each person from different angles: front, left side and right side\n\n. Here's a candid reflection:
1. Current LLMs and their Limitations: Let's be unequivocally clear—present-day language models, including transformers, are not a direct path to AGI. They are sophisticated token predictors, highly skilled in generalizing from vast datasets but lacking true understanding. They operate in what might be termed the 'dog-AGI' phase—impressive, yes, but nowhere close to the 'god-AGI' phase we aspire to reach.
2. The Nature of 'Smart': These models, for all their complexity, are not sentient. They don't possess the rich tapestry of human experience—our memories, relationships, and 'eureka' moments that constitute learning and wisdom. They are yet to evolve from merely processing information to experiencing and understanding the nuances of life as we know it.
3. Stockpiling NVIDIA cards and accumulating GPU clusters is not the golden ticket to AGI. The pursuit of AGI is not solely a quest for more processing power. It is a deeper, more philosophical journey where:
- Space Outposts and Ion Engines: Mankind should expand beyond the terrestrial, reaching for space outposts and harnessing commercialized ion engines for space travel. Ion engines, with their extended operational capacity, liberate us from the constraints of chemical fuel, enabling voyages that stretch both time and distance.
- Asteroid Mining and the Periodic Table: The quest for AGI is mirrored in our endeavor to mine asteroids, potentially revealing new elements that could add unknown dimensions to our periodic table. This is not merely resource extraction; it is an exploration that feeds into the self-iterative learning nature of AGI, fostering an intelligence that grows with each discovery.
- Nuclear Mass Energy and Helium-3: We look beyond silicon to the immense potential of nuclear mass energy. Helium-3, fused from deuterium in high-efficiency fusion generators, represents a future energy source that could power the next leaps in AGI development. Overcoming the scarcity of Helium-3 is a challenge we are poised to tackle, paving the way for a new era of energy abundance.
4. The Road Ahead: As we venture into the unknown, let's reimagine our approach. We seek an AI that lives a 'life', so to speak, with context vectors representing not just data points but the essence of existence itself. Imagine an AI with a library of experiences, including life choices and personal growth, akin to a human with 60 years of rich, varied living.
## Overview
The know-saraswati-cot dataset is a curated collection of examples designed to train and evaluate large language models (LLMs) on stream of consciousness (SoC), chain of thought (CoT), and logical reasoning. Named after Saraswati, the Hindu goddess of knowledge, wisdom, and learning, this dataset embodies the spirit of open-source knowledge sharing. It is an ode to democratizing knowledge, making it as accessible as the flowing waters of the mythical Saraswati river.
With an additional 30,000 code reasoning examples and various other deep reasoning scenarios, this dataset aims to imbue LLMs with a profound capacity for understanding, reasoning, and decision-making.
## Dataset Structure
Each entry in the know-saraswati-cot dataset comprises an instruction and an output field. Same old stuff, I like this format. The instruction provides a scenario or question that requires deep thinking, inviting the model to engage in a step-by-step reasoning process. The output then captures a reasoned response that aligns with the principles of logical deduction and stream of consciousness thought.
The know-saraswati-cot dataset has been meticulously crafted to reflect the intricacies of human-like reasoning. Here are some key specifications:
- Concise Reasoning: The majority of examples are concisely formulated within 500 tokens, fostering quick and efficient chains of thought (CoT). This simulates the succinct yet profound reasoning processes akin to human cognition.
- Multi-Turn Interactions: Some entries are designed as multi-turn interactions, allowing models to engage in a deeper and more dynamic discourse. This emulates real-world conversations where dialogues build upon previous exchanges.
- Extended Discussions: A subset of the dataset accommodates scenarios extending up to 2000 tokens for comprehensive reasoning tasks. These are tailored to model how a sapient being would thoughtfully respond to complex logic puzzles, as opposed to the often superficial and tangential responses generated by less sophisticated models.
- Each example is tailored to how an actual sapien would reason and respond, capturing the essence of human logic, emotion, and cognition. This approach aims to steer AI responses away from the undeveloped and extraneous output that LLMs usually produce, guiding them towards relevance and depth that truly address the query at hand.
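
As a concrete illustration of that layout, an entry could be inspected roughly as follows (a hedged sketch: the split name and the exact `instruction`/`output` keys are assumptions based on the description above, not confirmed by the card):

```python
# Hedged sketch: assumes a default config with a "train" split and plain
# "instruction"/"output" string fields, as described in this card.
from datasets import load_dataset

ds = load_dataset("knowrohit07/know-saraswati-cot", split="train")

example = ds[0]
print(example["instruction"])  # scenario or question inviting step-by-step reasoning
print(example["output"])       # the reasoned, chain-of-thought style response

# Rough word-count proxy for the "within 500 tokens" design goal above.
short = ds.filter(lambda ex: len(ex["output"].split()) <= 500)
print(f"{len(short)} of {len(ds)} examples have outputs of ~500 words or fewer")
```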
## Inspiration
Inspired by the vision of making knowledge free and accessible for all, akin to the way Goddess Saraswati is revered for her gifts of learning and enlightenment, this dataset was synthesized using GPT-4. A special pranaam and blessings 🙏 from my brother, whose vision of a frugally enlightened world where knowledge is a common wealth has been the cornerstone of this endeavor.
## Use Cases
The know-saraswati-cot dataset can be used in the following ways:
1. By providing rich, nuanced examples of logical reasoning, the dataset is perfect for developing models that can mimic the depth of human thought processes.
2. Researchers can leverage the dataset to investigate how AI models can not only reach conclusions but also articulate the reasoning behind their decisions, making AI workings more transparent.
3. know-saraswati-cot can foster AI development that intersects with philosophy, literature, and engineering, encouraging holistic and multidimensional growth in AI capabilities.
4. have fun | knowrohit07/know-saraswati-cot | [
"license:openrail",
"region:us"
]
| 2023-11-17T08:46:58+00:00 | {"license": "openrail"} | 2023-11-21T22:39:23+00:00 | []
| []
| TAGS
#license-openrail #region-us
|
### To all devs, scholars, and also fugazis of AI - A Philosophical Standpoint on AGI:
- This is extraneous, if you have time to read it-- give it a shot. We stand at the precipice of a digital era where the notions of artificial intelligence are often muddled with the grandiose idea of Artificial General Intelligence (AGI). Here's a candid reflection:
1. Current LLMs and their Limitations: Let's be unequivocally clear—present-day language models, including transformers, are not a direct path to AGI. They are sophisticated token predictors, highly skilled in generalizing from vast datasets but lacking true understanding. They operate in what might be termed the 'dog-AGI' phase—impressive, yes, but nowhere close to the 'god-AGI' phase we aspire to reach.
2. The Nature of 'Smart': These models, for all their complexity, are not sentient. They don't possess the rich tapestry of human experience—our memories, relationships, and 'eureka' moments that constitute learning and wisdom. They are yet to evolve from merely processing information to experiencing and understanding the nuances of life as we know it.
3. Stockpiling NVIDIA cards and accumulating GPU clusters is not the golden ticket to AGI. The pursuit of AGI is not solely a quest for more processing power. It is a deeper, more philosophical journey where:
- Space Outposts and Ion Engines: Mankind should expand beyond the terrestrial, reaching for space outposts and harnessing commercialized ion engines for space travel. Ion engines, with their extended operational capacity, liberate us from the constraints of chemical fuel, enabling voyages that stretch both time and distance.
- Asteroid Mining and the Periodic Table: The quest for AGI is mirrored in our endeavor to mine asteroids, potentially revealing new elements that could add unknown dimensions to our periodic table. This is not merely resource extraction; it is an exploration that feeds into the self-iterative learning nature of AGI, fostering an intelligence that grows with each discovery.
- Nuclear Mass Energy and Helium-3: We look beyond silicon to the immense potential of nuclear mass energy. Helium-3, fused from deuterium in high-efficiency fusion generators, represents a future energy source that could power the next leaps in AGI development. Overcoming the scarcity of Helium-3 is a challenge we are poised to tackle, paving the way for a new era of energy abundance.
4. The Road Ahead: As we venture into the unknown, let's reimagine our approach. We seek an AI that lives a 'life', so to speak, with context vectors representing not just data points but the essence of existence itself. Imagine an AI with a library of experiences, including life choices and personal growth, akin to a human with 60 years of rich, varied living.
## Overview
The know-saraswati-cot dataset is a curated collection of examples designed to train and evaluate large language models (LLMs) on stream of consciousness (SoC), chain of thought (CoT), and logical reasoning. Named after Saraswati, the Hindu goddess of knowledge, wisdom, and learning, this dataset embodies the spirit of open-source knowledge sharing. It is an ode to democratizing knowledge, making it as accessible as the flowing waters of the mythical Saraswati river.
With an additional 30,000 code reasoning examples and various other deep reasoning scenarios, this dataset aims to imbue LLMs with a profound capacity for understanding, reasoning, and decision-making.
## Dataset Structure
Each entry in the know-saraswati-cot dataset comprises an instruction and an output field. Same old stuff, I like this format. The instruction provides a scenario or question that requires deep thinking, inviting the model to engage in a step-by-step reasoning process. The output then captures a reasoned response that aligns with the principles of logical deduction and stream of consciousness thought.
The know-saraswati-cot dataset has been meticulously crafted to reflect the intricacies of human-like reasoning. Here are some key specifications:
- Concise Reasoning: The majority of examples are concisely formulated within 500 tokens, fostering quick and efficient chains of thought (CoT). This simulates the succinct yet profound reasoning processes akin to human cognition.
- Multi-Turn Interactions: Some entries are designed as multi-turn interactions, allowing models to engage in a deeper and more dynamic discourse. This emulates real-world conversations where dialogues build upon previous exchanges.
- Extended Discussions: A subset of the dataset accommodates scenarios extending up to 2000 tokens for comprehensive reasoning tasks. These are tailored to model how a sapient being would thoughtfully respond to complex logic puzzles, as opposed to the often superficial and tangential responses generated by less sophisticated models.
- Each example is tailored to how an actual sapien would reason and respond, capturing the essence of human logic, emotion, and cognition. This approach aims to steer AI responses away from the undeveloped and extraneous output that LLMs usually produce, guiding them towards relevance and depth that truly address the query at hand.
## Inspiration
Inspired by the vision of making knowledge free and accessible for all, akin to the way Goddess Saraswati is revered for her gifts of learning and enlightenment, this dataset was synthesized using GPT-4. A special pranaam and blessings from my brother, whose vision of a frugally enlightened world where knowledge is a common wealth has been the cornerstone of this endeavor.
## Use Cases
The know-saraswati-cot dataset can be used in the following ways:
1. By providing rich, nuanced examples of logical reasoning, the dataset is perfect for developing models that can mimic the depth of human thought processes.
2. Researchers can leverage the dataset to investigate how AI models can not only reach conclusions but also articulate the reasoning behind their decisions, making AI workings more transparent.
3. know-saraswati-cot can foster AI development that intersects with philosophy, literature, and engineering, encouraging holistic and multidimensional growth in AI capabilities.
4. have fun | [
"### To all devs, scholars, and also fugazis of AI - A Philosophical Standpoint on AGI:\n\n- This is extraneous, if you have time to read it-- give it a shot. We stand at the precipice of a digital era where the notions of artificial intelligence are often muddled with the grandiose idea of Artificial General Intelligence (AGI). Here's a candid reflection:\n\n1. Current LLMs and their Limitations: Let's be unequivocally clear—present-day language models, including transformers, are not a direct path to AGI. They are sophisticated token predictors, highly skilled in generalizing from vast datasets but lacking true understanding. They operate in what might be termed the 'dog-AGI' phase—impressive, yes, but nowhere close to the 'god-AGI' phase we aspire to reach.\n\n2. The Nature of 'Smart': These models, for all their complexity, are not sentient. They don't possess the rich tapestry of human experience—our memories, relationships, and 'eureka' moments that constitute learning and wisdom. They are yet to evolve from merely processing information to experiencing and understanding the nuances of life as we know it.\n\n3. Stockpiling NVIDIA cards and accumulating GPU clusters is not the golden ticket to AGI. The pursuit of AGI is not solely a quest for more processing power. It is a deeper, more philosophical journey where:\n\n- Space Outposts and Ion Engines: Mankind should expandd beyond the terrestrial, reaching for space outposts and harnessing commercialized ion engines for space travel. Ion engines, with their extended operational capacity, liberate us from the constraints of chemical fuel, enabling voyages that stretch both time and distance.\n\n- Asteroid Mining and the Periodic Table: The quest for AGI is mirrored in our endeavor to mine asteroids, potentially revealing new elements that could add unknown dimensions to our periodic table. This is not merely resource extraction; it is an exploration that feeds into the self-iterative learning nature of AGI, fostering an intelligence that grows with each discovery.\n\n- Nuclear Mass Energy and Helium-3: We look beyond silicon to the immense potential of nuclear mass energy. Helium-3, fused from deuterium in high-efficiency fusion generators, represents a future energy source that could power the next leaps in AGI development. Overcoming the scarcity of Helium-3 is a challenge we are poised to tackle, paving the way for a new era of energy abundance.\n\n4. The Road Ahead: As we venture into the unknown, let's reimagine our approach. We seek an AI that lives a 'life', so to speak, with context vectors representing not just data points but the essence of existence itself. Imagine an AI with a library of experiences, including life choices and personal growth, akin to a human with 60 years of rich, varied living.",
"## Overview\nThe know-saraswati-cot dataset is a curated collection of examples designed to train and evaluate large language models (LLMs) on stream of consciousness (SoC), chain of thought (CoT), and logical reasoning. Named after Saraswati, the Hindu goddess of knowledge, wisdom, and learning, this dataset embodies the spirit of open-source knowledge sharing. It is an ode to democratizing knowledge, making it as accessible as the flowing waters of the mythical Saraswati river.\n\nWith addtional 30,000 code reasoning examples and various other deep reasoning scenarios, this dataset aims to imbue LLMs with a profound capacity for understanding, reasoning, and decision-making.",
"## Dataset Structure\n\nEach entry in the know-saraswati-cot dataset comprises an instruction and an output field. Same old stuff, i like this format. The instruction provides a scenario or question that requires deep thinking, inviting the model to engage in a step-by-step reasoning process. The output then captures a reasoned response that aligns with the principles of logical deduction and stream of consciousness thought.\nThe know-saraswati-cot dataset has been meticulously crafted to reflect the intricacies of human-like reasoning. Here are some key specifications:\n\n- Concise Reasoning: The majority of examples are concisely formulated within 500 tokens, fostering quick and efficient chains of thought (CoT). This simulates the succinct yet profound reasoning processes akin to human cognition.\n\n- Multi-Turn Interactions: Some entries are designed as multi-turn interactions, allowing models to engage in a deeper and more dynamic discourse. This emulates real-world conversations where dialogues build upon previous exchanges.\n\n- Extended Discussions: A subset of the dataset accommodates scenarios extending up to 2000 tokens for comprehensive reasoning tasks. These are tailored to model how a sapient being would thoughtfully respond to complex logic puzzles, as opposed to the often superficial and tangential responses generated by less sophisticated models.\n\n- Each example is tailored to how an actual sapien would reason and respond, capturing the essence of human logic, emotion, and cognition. This approach aims to steer AI responses away from the undeveloped and extraneous (which usually llms respond with), guiding them towards relevance and depth that truly address the query at hand.",
"## Inspiration\nInspired by the vision of making knowledge free and accessible for all, akin to the way Goddess Saraswati is revered for her gifts of learning and enlightenment, this dataset was synthesized using GPT-4. A special pranaam and blessings from my brother, whose vision of a frugally enlightened world where knowledge is a common wealth has been the cornerstone of this endeavor.",
"## Use Cases\nThe know-saraswati-cot dataset can be utilized to:\n\n1. By providing rich, nuanced examples of logical reasoning, the dataset is perfect for developing models that can mimic the depth of human thought processes.\n\n2. Researchers can leverage the dataset to investigate how AI models can not only reach conclusions but also articulate the reasoning behind their decisions, making AI workings more transparent.\n\n3. know-saraswati-cot can foster AI development that intersects with philosophy, literature, and the Engineering, encouraging holistic and multidimensional growth in AI capabilities.\n\n4. have fun"
]
| [
"TAGS\n#license-openrail #region-us \n",
"### To all devs, scholars, and also fugazis of AI - A Philosophical Standpoint on AGI:\n\n- This is extraneous, if you have time to read it-- give it a shot. We stand at the precipice of a digital era where the notions of artificial intelligence are often muddled with the grandiose idea of Artificial General Intelligence (AGI). Here's a candid reflection:\n\n1. Current LLMs and their Limitations: Let's be unequivocally clear—present-day language models, including transformers, are not a direct path to AGI. They are sophisticated token predictors, highly skilled in generalizing from vast datasets but lacking true understanding. They operate in what might be termed the 'dog-AGI' phase—impressive, yes, but nowhere close to the 'god-AGI' phase we aspire to reach.\n\n2. The Nature of 'Smart': These models, for all their complexity, are not sentient. They don't possess the rich tapestry of human experience—our memories, relationships, and 'eureka' moments that constitute learning and wisdom. They are yet to evolve from merely processing information to experiencing and understanding the nuances of life as we know it.\n\n3. Stockpiling NVIDIA cards and accumulating GPU clusters is not the golden ticket to AGI. The pursuit of AGI is not solely a quest for more processing power. It is a deeper, more philosophical journey where:\n\n- Space Outposts and Ion Engines: Mankind should expandd beyond the terrestrial, reaching for space outposts and harnessing commercialized ion engines for space travel. Ion engines, with their extended operational capacity, liberate us from the constraints of chemical fuel, enabling voyages that stretch both time and distance.\n\n- Asteroid Mining and the Periodic Table: The quest for AGI is mirrored in our endeavor to mine asteroids, potentially revealing new elements that could add unknown dimensions to our periodic table. This is not merely resource extraction; it is an exploration that feeds into the self-iterative learning nature of AGI, fostering an intelligence that grows with each discovery.\n\n- Nuclear Mass Energy and Helium-3: We look beyond silicon to the immense potential of nuclear mass energy. Helium-3, fused from deuterium in high-efficiency fusion generators, represents a future energy source that could power the next leaps in AGI development. Overcoming the scarcity of Helium-3 is a challenge we are poised to tackle, paving the way for a new era of energy abundance.\n\n4. The Road Ahead: As we venture into the unknown, let's reimagine our approach. We seek an AI that lives a 'life', so to speak, with context vectors representing not just data points but the essence of existence itself. Imagine an AI with a library of experiences, including life choices and personal growth, akin to a human with 60 years of rich, varied living.",
"## Overview\nThe know-saraswati-cot dataset is a curated collection of examples designed to train and evaluate large language models (LLMs) on stream of consciousness (SoC), chain of thought (CoT), and logical reasoning. Named after Saraswati, the Hindu goddess of knowledge, wisdom, and learning, this dataset embodies the spirit of open-source knowledge sharing. It is an ode to democratizing knowledge, making it as accessible as the flowing waters of the mythical Saraswati river.\n\nWith addtional 30,000 code reasoning examples and various other deep reasoning scenarios, this dataset aims to imbue LLMs with a profound capacity for understanding, reasoning, and decision-making.",
"## Dataset Structure\n\nEach entry in the know-saraswati-cot dataset comprises an instruction and an output field. Same old stuff, i like this format. The instruction provides a scenario or question that requires deep thinking, inviting the model to engage in a step-by-step reasoning process. The output then captures a reasoned response that aligns with the principles of logical deduction and stream of consciousness thought.\nThe know-saraswati-cot dataset has been meticulously crafted to reflect the intricacies of human-like reasoning. Here are some key specifications:\n\n- Concise Reasoning: The majority of examples are concisely formulated within 500 tokens, fostering quick and efficient chains of thought (CoT). This simulates the succinct yet profound reasoning processes akin to human cognition.\n\n- Multi-Turn Interactions: Some entries are designed as multi-turn interactions, allowing models to engage in a deeper and more dynamic discourse. This emulates real-world conversations where dialogues build upon previous exchanges.\n\n- Extended Discussions: A subset of the dataset accommodates scenarios extending up to 2000 tokens for comprehensive reasoning tasks. These are tailored to model how a sapient being would thoughtfully respond to complex logic puzzles, as opposed to the often superficial and tangential responses generated by less sophisticated models.\n\n- Each example is tailored to how an actual sapien would reason and respond, capturing the essence of human logic, emotion, and cognition. This approach aims to steer AI responses away from the undeveloped and extraneous (which usually llms respond with), guiding them towards relevance and depth that truly address the query at hand.",
"## Inspiration\nInspired by the vision of making knowledge free and accessible for all, akin to the way Goddess Saraswati is revered for her gifts of learning and enlightenment, this dataset was synthesized using GPT-4. A special pranaam and blessings from my brother, whose vision of a frugally enlightened world where knowledge is a common wealth has been the cornerstone of this endeavor.",
"## Use Cases\nThe know-saraswati-cot dataset can be utilized to:\n\n1. By providing rich, nuanced examples of logical reasoning, the dataset is perfect for developing models that can mimic the depth of human thought processes.\n\n2. Researchers can leverage the dataset to investigate how AI models can not only reach conclusions but also articulate the reasoning behind their decisions, making AI workings more transparent.\n\n3. know-saraswati-cot can foster AI development that intersects with philosophy, literature, and the Engineering, encouraging holistic and multidimensional growth in AI capabilities.\n\n4. have fun"
]
| [
12,
689,
167,
400,
92,
141
]
| [
"passage: TAGS\n#license-openrail #region-us \n",
"passage: ### To all devs, scholars, and also fugazis of AI - A Philosophical Standpoint on AGI:\n\n- This is extraneous, if you have time to read it-- give it a shot. We stand at the precipice of a digital era where the notions of artificial intelligence are often muddled with the grandiose idea of Artificial General Intelligence (AGI). Here's a candid reflection:\n\n1. Current LLMs and their Limitations: Let's be unequivocally clear—present-day language models, including transformers, are not a direct path to AGI. They are sophisticated token predictors, highly skilled in generalizing from vast datasets but lacking true understanding. They operate in what might be termed the 'dog-AGI' phase—impressive, yes, but nowhere close to the 'god-AGI' phase we aspire to reach.\n\n2. The Nature of 'Smart': These models, for all their complexity, are not sentient. They don't possess the rich tapestry of human experience—our memories, relationships, and 'eureka' moments that constitute learning and wisdom. They are yet to evolve from merely processing information to experiencing and understanding the nuances of life as we know it.\n\n3. Stockpiling NVIDIA cards and accumulating GPU clusters is not the golden ticket to AGI. The pursuit of AGI is not solely a quest for more processing power. It is a deeper, more philosophical journey where:\n\n- Space Outposts and Ion Engines: Mankind should expandd beyond the terrestrial, reaching for space outposts and harnessing commercialized ion engines for space travel. Ion engines, with their extended operational capacity, liberate us from the constraints of chemical fuel, enabling voyages that stretch both time and distance.\n\n- Asteroid Mining and the Periodic Table: The quest for AGI is mirrored in our endeavor to mine asteroids, potentially revealing new elements that could add unknown dimensions to our periodic table. This is not merely resource extraction; it is an exploration that feeds into the self-iterative learning nature of AGI, fostering an intelligence that grows with each discovery.\n\n- Nuclear Mass Energy and Helium-3: We look beyond silicon to the immense potential of nuclear mass energy. Helium-3, fused from deuterium in high-efficiency fusion generators, represents a future energy source that could power the next leaps in AGI development. Overcoming the scarcity of Helium-3 is a challenge we are poised to tackle, paving the way for a new era of energy abundance.\n\n4. The Road Ahead: As we venture into the unknown, let's reimagine our approach. We seek an AI that lives a 'life', so to speak, with context vectors representing not just data points but the essence of existence itself. Imagine an AI with a library of experiences, including life choices and personal growth, akin to a human with 60 years of rich, varied living.## Overview\nThe know-saraswati-cot dataset is a curated collection of examples designed to train and evaluate large language models (LLMs) on stream of consciousness (SoC), chain of thought (CoT), and logical reasoning. Named after Saraswati, the Hindu goddess of knowledge, wisdom, and learning, this dataset embodies the spirit of open-source knowledge sharing. It is an ode to democratizing knowledge, making it as accessible as the flowing waters of the mythical Saraswati river.\n\nWith addtional 30,000 code reasoning examples and various other deep reasoning scenarios, this dataset aims to imbue LLMs with a profound capacity for understanding, reasoning, and decision-making."
]
|
997892d49102d6991ba48873473e2a49a8bfbeed | # Dataset Card for "tr-nlp-qa-xquad-trquad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | gorkemgoknar/tr-nlp-qa-xquad-trquad | [
"region:us"
]
| 2023-11-17T09:44:04+00:00 | {"dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11467186, "num_examples": 9498}], "download_size": 1607454, "dataset_size": 11467186}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T09:47:45+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "tr-nlp-qa-xquad-trquad"
More Information needed | [
"# Dataset Card for \"tr-nlp-qa-xquad-trquad\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"tr-nlp-qa-xquad-trquad\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"tr-nlp-qa-xquad-trquad\"\n\nMore Information needed"
]
|
ce95e518f000b7d51e7de805431af9b039b52685 | # Dataset Card for "fever-sft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jzjiao/fever-sft | [
"region:us"
]
| 2023-11-17T09:48:48+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "sft_text", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "ground_truth_output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 214479682, "num_examples": 161897}, {"name": "test", "num_bytes": 12532510, "num_examples": 9583}, {"name": "validation", "num_bytes": 12370280, "num_examples": 9525}], "download_size": 63680936, "dataset_size": 239382472}} | 2023-11-17T09:49:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "fever-sft"
More Information needed | [
"# Dataset Card for \"fever-sft\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"fever-sft\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"fever-sft\"\n\nMore Information needed"
]
|
0fc0496dec2e0b966641c5e91c36abd9bd505b13 | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=10).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | ttrung1402/UIT-ViIC | [
"task_categories:image-to-text",
"size_categories:1K<n<10K",
"language:vi",
"license:cc-by-4.0",
"region:us"
]
| 2023-11-17T09:49:48+00:00 | {"language": ["vi"], "license": "cc-by-4.0", "size_categories": ["1K<n<10K"], "task_categories": ["image-to-text"]} | 2023-11-17T15:14:42+00:00 | []
| [
"vi"
]
| TAGS
#task_categories-image-to-text #size_categories-1K<n<10K #language-Vietnamese #license-cc-by-4.0 #region-us
| # Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#task_categories-image-to-text #size_categories-1K<n<10K #language-Vietnamese #license-cc-by-4.0 #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
46,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#task_categories-image-to-text #size_categories-1K<n<10K #language-Vietnamese #license-cc-by-4.0 #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
9505f61367911e7ecbc3e480a35b3198dee8840c | annotations_creators:
- expert-generated
language:
- eng
language_creators:
- crowdsourced
- other
license: []
multilinguality:
- monolingual
pretty_name: Nigeria ISP apps reviews
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- mtn
- glo
- reviews
- etisalat
- 9mobile
- airtel
- nigeria
- nga
- isp
- apps
task_categories:
- token-classification
- text-classification
task_ids:
- lemmatization
- sentiment-analysis
- sentiment-classification | Geosb/Nigeria_ISPs_app_reviews | [
"region:us"
]
| 2023-11-17T10:16:01+00:00 | {} | 2023-11-17T11:08:46+00:00 | []
| []
| TAGS
#region-us
| annotations_creators:
- expert-generated
language:
- eng
language_creators:
- crowdsourced
- other
license: []
multilinguality:
- monolingual
pretty_name: Nigeria ISP apps reviews
size_categories:
- 1M<n<10M
source_datasets:
- original
tags:
- mtn
- glo
- reviews
- etisalat
- 9mobile
- airtel
- nigeria
- nga
- isp
- apps
task_categories:
- token-classification
- text-classification
task_ids:
- lemmatization
- sentiment-analysis
- sentiment-classification | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
5bc925db290906fb1d8b3b1cf45a8ff88bad3b0d | # Dataset Card for "zalo_ai_math_alpaca_input_aug"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | MonKira/zalo_ai_math_alpaca_input_aug | [
"region:us"
]
| 2023-11-17T10:40:03+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "Option_A", "dtype": "string"}, {"name": "Option_B", "dtype": "string"}, {"name": "Option_C", "dtype": "string"}, {"name": "Option_D", "dtype": "string"}, {"name": "explanation", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1120039, "num_examples": 1305}], "download_size": 397221, "dataset_size": 1120039}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T10:40:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "zalo_ai_math_alpaca_input_aug"
More Information needed | [
"# Dataset Card for \"zalo_ai_math_alpaca_input_aug\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"zalo_ai_math_alpaca_input_aug\"\n\nMore Information needed"
]
| [
6,
24
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"zalo_ai_math_alpaca_input_aug\"\n\nMore Information needed"
]
|
885efaf13ab74d7d705f80a133587f8daad95874 | # Dataset Card for "arxiv-2shot-4096"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | anumafzal94/arxiv-2shot-4096 | [
"region:us"
]
| 2023-11-17T10:51:50+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "summary", "dtype": "string"}, {"name": "few-shot", "dtype": "bool"}], "splits": [{"name": "test", "num_bytes": 3262821.167598633, "num_examples": 97}, {"name": "train", "num_bytes": 73114333.94539191, "num_examples": 2066}], "download_size": 5283534, "dataset_size": 76377155.11299054}} | 2023-11-17T14:51:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "arxiv-2shot-4096"
More Information needed | [
"# Dataset Card for \"arxiv-2shot-4096\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"arxiv-2shot-4096\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"arxiv-2shot-4096\"\n\nMore Information needed"
]
|
61a221e314483db1bb455a03b9b6bcb1230d372f | # Python repositories from StarCoderData | mponty/reordered_python_starcoderdata | [
"region:us"
]
| 2023-11-17T11:05:24+00:00 | {"dataset_info": {"features": [{"name": "reponame", "dtype": "string"}, {"name": "files", "list": [{"name": "content", "dtype": "string"}, {"name": "id", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "matching_score", "dtype": "float64"}, {"name": "max_stars_count", "dtype": "int64"}, {"name": "path", "dtype": "string"}]}, {"name": "median_score", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 61453380449, "num_examples": 902466}], "download_size": 20060701005, "dataset_size": 61453380449}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T19:53:20+00:00 | []
| []
| TAGS
#region-us
| # Python repositories from StarCoderData | [
"# Python repositories from StarCoderData"
]
| [
"TAGS\n#region-us \n",
"# Python repositories from StarCoderData"
]
| [
6,
10
]
| [
"passage: TAGS\n#region-us \n# Python repositories from StarCoderData"
]
|
8cb8fba6789c923730ba205eb3eab2f496f8851d | # Dataset Card for "c4-cantonese-filterd"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | indiejoseph/c4-cantonese-filtered | [
"region:us"
]
| 2023-11-17T11:18:26+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2304453, "num_examples": 21558}], "download_size": 1820474, "dataset_size": 2304453}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T11:18:29+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "c4-cantonese-filterd"
More Information needed | [
"# Dataset Card for \"c4-cantonese-filterd\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"c4-cantonese-filterd\"\n\nMore Information needed"
]
| [
6,
18
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"c4-cantonese-filterd\"\n\nMore Information needed"
]
|
28705a05995f1b720e6b3c75ab482bfacdb2e74e |
**CoNLL-U(ish) file of 954 sentences of 164 texts, containing abbreviations and their extensions.**
# source
The original data was extracted from the [Austrian Biographical Lexicon (ÖBL)](https://www.oeaw.ac.at/acdh/oebl) in the context of the [Austrian Prosopographical Information System (APIS) project](https://www.oeaw.ac.at/acdh/projects/completed-projects/apis).
From there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.
The texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the [Austrian Academy of Sciences](https://www.oeaw.ac.at/en) between 1957 and 2023.
The language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in this related dataset).
Another dataset stemming from this source and containing named entities can be found here: https://huggingface.co/datasets/SteffRhes/APIS_OEBL__Named_Entity_Recognition .
# structure
Tokenized, mostly adhering to CoNLL-U, except for the additions of:
**EXPAN=**
Indicating if a token is an abbreviation.
`EXPAN=O` means the token is not an abbreviation.
`EXPAN=B-<EXTENSION>` means the token is an abbreviation, and its extension is `<EXTENSION>`.
**PersonName=**
`PersonName=<YES/NO>` In case the abbreviation is for a name, this is declared explicitly, as resolving the abbreviation into its extension wouldn't require generic language knowledge but a contextual one, and hence it should be filtered out for NLP training (probably).
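
To make the annotation scheme concrete, here is a small parsing sketch (not shipped with the dataset; it assumes a tab-separated CoNLL-U layout with the token form in the second column and the `EXPAN=`/`PersonName=` features carried in the final MISC column, joined by `|`):

```python
# Hedged sketch: collect (abbreviation, extension) pairs from a CoNLL-U(ish)
# file, skipping plain tokens (EXPAN=O) and name abbreviations (PersonName=YES).
def read_abbreviations(path):
    pairs = []
    with open(path, encoding="utf-8") as f:
        for line in f:
            line = line.rstrip("\n")
            if not line or line.startswith("#"):
                continue  # blank lines and comment/metadata lines
            cols = line.split("\t")
            form, misc = cols[1], cols[-1]
            feats = dict(
                item.split("=", 1) for item in misc.split("|") if "=" in item
            )
            expan = feats.get("EXPAN", "O")
            if expan == "O" or feats.get("PersonName") == "YES":
                continue  # no abbreviation, or a name that needs context to resolve
            pairs.append((form, expan.removeprefix("B-")))
    return pairs
```

Anything beyond the conventions stated above (tab separation, feature placement, casing of YES/NO) is an assumption and may need adjusting to the actual file.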
**no train, dev, eval split**
We decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups. | SteffRhes/APIS_OEBL__Abbreviations | [
"task_categories:token-classification",
"language:de",
"license:mit",
"region:us"
]
| 2023-11-17T11:24:23+00:00 | {"language": ["de"], "license": "mit", "task_categories": ["token-classification"], "pretty_name": "APIS \u00d6BL Abbreviations"} | 2023-11-29T16:13:47+00:00 | []
| [
"de"
]
| TAGS
#task_categories-token-classification #language-German #license-mit #region-us
|
CoNLL-U(ish) file of 954 sentences of 164 texts, containing abbreviations and their extensions.
# source
The original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.
From there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.
The texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the Austrian Academy of Sciences between 1957 and 2023.
The language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in this related dataset).
Another dataset stemming from this source and containing named entities can be found here: URL .
# structure
Tokenized, mostly adhering to CoNLL-U, except for the additions of:
EXPAN=
Indicating if a token is an abbreviation.
'EXPAN=O' means the token is not an abbreviation.
'EXPAN=B-<EXTENSION>' means the token is an abbreviation, and its extension is '<EXTENSION>'.
PersonName=
'PersonName=<YES/NO>' In case the abbreviation is for a name, this is declared explicitly, as resolving the abbreviation into its extension wouldn't require generic language knowledge but a contextual one, and hence it should be filtered out for NLP training (probably).
no train, dev, eval split
We decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups. | [
"# source\n\nThe original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.\n\nFrom there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.\n\nThe texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the Austrian Acadamey of Sciences between 1957 and 2023.\n\nThe language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in this related dataset).\n\nAnother dataset stemming from this source and containing named entities can be found here: URL .",
"# structure\n\nTokenized, mostly adhering to CoNLL-U, except for the additions of:\n\nEXPAN= \n\nIndicating if a token is an abbreviation. \n\n'EXPAN=O' means the token is no abbreviation.\n\n'EXPAN=B-<EXTENSION>' means the token is an abbreviation, and its extension is '<EXTENSION>'.\n\nPersonName=\n\n'PersonName=<YES/NO>' In case of the abbreviation being for a name, this is declared explicitely as resolving the abbreviation into its extension wouldn't require generic language knowledge but contextual one, and hence should be filtered out for NLP training (probably).\n\n\nno train, dev, eval split\n\nWe decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups."
]
| [
"TAGS\n#task_categories-token-classification #language-German #license-mit #region-us \n",
"# source\n\nThe original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.\n\nFrom there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.\n\nThe texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the Austrian Acadamey of Sciences between 1957 and 2023.\n\nThe language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in this related dataset).\n\nAnother dataset stemming from this source and containing named entities can be found here: URL .",
"# structure\n\nTokenized, mostly adhering to CoNLL-U, except for the additions of:\n\nEXPAN= \n\nIndicating if a token is an abbreviation. \n\n'EXPAN=O' means the token is no abbreviation.\n\n'EXPAN=B-<EXTENSION>' means the token is an abbreviation, and its extension is '<EXTENSION>'.\n\nPersonName=\n\n'PersonName=<YES/NO>' In case of the abbreviation being for a name, this is declared explicitely as resolving the abbreviation into its extension wouldn't require generic language knowledge but contextual one, and hence should be filtered out for NLP training (probably).\n\n\nno train, dev, eval split\n\nWe decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups."
]
| [
27,
176,
203
]
| [
"passage: TAGS\n#task_categories-token-classification #language-German #license-mit #region-us \n# source\n\nThe original data was extracted from the Austrian Biographical Lexicon (ÖBL) in the context of the Austrian Prosopographical Information System (APIS) project.\n\nFrom there, samples were randomly pulled and annotated for Named Entity Recognition tasks, which form this dataset.\n\nThe texts concern numerous smaller biographies in the time period between 19th and early 20th century within historical Austria-Hungary, and were produced by the Austrian Acadamey of Sciences between 1957 and 2023.\n\nThe language style is rather condensed and contains a lot of domain-specific abbreviations (some of which were resolved in this related dataset).\n\nAnother dataset stemming from this source and containing named entities can be found here: URL .# structure\n\nTokenized, mostly adhering to CoNLL-U, except for the additions of:\n\nEXPAN= \n\nIndicating if a token is an abbreviation. \n\n'EXPAN=O' means the token is no abbreviation.\n\n'EXPAN=B-<EXTENSION>' means the token is an abbreviation, and its extension is '<EXTENSION>'.\n\nPersonName=\n\n'PersonName=<YES/NO>' In case of the abbreviation being for a name, this is declared explicitely as resolving the abbreviation into its extension wouldn't require generic language knowledge but contextual one, and hence should be filtered out for NLP training (probably).\n\n\nno train, dev, eval split\n\nWe decided against pre-splitting the data into these sets, as their quantities might differ between requirements of various NLP training setups."
]
|
0d01868b58b188f7d0815da29d73ff7b6aa06c1f |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | splash657/emo_w_quotes | [
"region:us"
]
| 2023-11-17T11:46:05+00:00 | {} | 2023-11-17T12:06:10+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
9cc9f7e5120f18822427bbb81295a6ca6d9c7524 | # Dataset Card for "zlbn_en"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | laitrongduc/zlbn_en | [
"region:us"
]
| 2023-11-17T11:57:45+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 50188488.852, "num_examples": 1362}], "download_size": 48956901, "dataset_size": 50188488.852}} | 2023-11-17T11:58:01+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "zlbn_en"
More Information needed | [
"# Dataset Card for \"zlbn_en\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"zlbn_en\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"zlbn_en\"\n\nMore Information needed"
]
|
7406b2d68094b2984122af6f517f63b38db6aa96 |
This repository contains CebuaNER, the largest gold-standard dataset for named entities in Cebuano. This dataset is used in the [paper](https://arxiv.org/abs/2310.00679) `CebuaNER: A New Baseline Cebuano Named Entity Recognition Model` to be presented at PACLIC 2023, authored by Ma. Beatrice Emanuela N. Pilar, Ellyza Mari J. Papas, Mary Loise Buenaventura, Dane C. Dedoroy, Myron Montefalcon, Jay Rhald Padilla, Lany Maceda, Mideth Abisado, and Joseph Imperial.
## Data
The dataset contribution of this study is a compilation of Cebuano news articles from two local news stations, Yes the Best Dumaguete and
the Filipinas Bisaya. To further increase the data count, we also incorporated another publicly available dataset from [Sunstar Cebu](https://github.com/rjrequina/Cebuano-POS-Tagger/blob/master/eval/data/scraped/news-raw.txt) pre-collected by independent researcher Arjemariel Requina. The total accumulated and filtered size of the Cebuano dataset is 4258 articles. All data used in this study are for non-commercial research purposes only and fall under Fair Use.
| Source | Original | Cleaned |
| ------------------------| ------------- | ------- |
| Yes the Best Dumaguete | 1484 | 781 |
| Filipinas Bisaya | 769 | 377 |
| Sunstar | 3100 | 3100 |
## Note on Data Cataloging
Please send an email before submitting this repository to any data cataloging, data aggregation, or benchmarking projects/initiatives. The proponents of the paper behind this dataset would like to be acknowledged appropriately or involved as co-authors.
## Contact
For questions, you may reach the corresponding author of the study at:
**Joseph Marvin Imperial**\
Faculty Member, Department of Computer Science\
Lab Head, NU Human Language Technology Lab\
[email protected] | josephimperial/CebuaNER | [
"size_categories:1K<n<10K",
"license:cc-by-nc-4.0",
"arxiv:2310.00679",
"region:us"
]
| 2023-11-17T11:58:38+00:00 | {"license": "cc-by-nc-4.0", "size_categories": ["1K<n<10K"]} | 2023-11-17T12:07:58+00:00 | [
"2310.00679"
]
| []
| TAGS
#size_categories-1K<n<10K #license-cc-by-nc-4.0 #arxiv-2310.00679 #region-us
| This repository contains CebuaNER, the largest gold-standard datasets for named entities in Cebuano. This dataset is used for the paper 'CebuaNER: A New Baseline Cebuano Named Entity Recognition Model' to be presented at PACLIC 2023, authored by Ma. Beatrice Emanuela N. Pilar, Ellyza Mari J. Papas, Mary Loise Buenaventura, Dane C. Dedoroy, Myron Montefalcon, Jay Rhald Padilla, Lany Maceda, Mideth Abisado, and Joseph Imperial.
Data
----
The dataset contribution of this study is a compilation of Cebuano news articles from two local news stations, Yes the Best Dumaguete and
the Filipinas Bisaya. To further increase the data count, we also incorporated another publicly available dataset from Sunstar Cebu pre-collected by independent researcher Arjemariel Requina. The total accumulated and filtered size of the Cebuano dataset is 4258 articles. All the dataset used for this study is purely for non-commercial research purposes and under Fair Use.
Source: Yes the Best Dumaguete, Original: 1484, Cleaned: 781
Source: Filipinas Bisaya, Original: 769, Cleaned: 377
Source: Sunstar, Original: 3100, Cleaned: 3100
Note on Data Cataloging
-----------------------
Please send an email before submitting this repository to any data cataloging, data aggregation, and benchmarking projects/initiatives. The proponents of the paper of this dataset would like to be acknowledged appropriately or involved in co-authorship.
Contact
-------
For questions, you may reach the corresponding author of the study at:
Joseph Marvin Imperial
Faculty Member, Department of Computer Science
Lab Head, NU Human Language Technology Lab
jrimperial@URL
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-cc-by-nc-4.0 #arxiv-2310.00679 #region-us \n"
]
| [
37
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-cc-by-nc-4.0 #arxiv-2310.00679 #region-us \n"
]
|
70e36feec41d0560739487c47eeb5aceef45b347 | # Dataset Card for "LongAlpaca-12k-th"
Thai-translated version of https://huggingface.co/datasets/Yukang/LongAlpaca-12k
Translated by Google Translate | thanaphatt1/LongAlpaca-12k-th | [
"region:us"
]
| 2023-11-17T12:19:28+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1171826005, "num_examples": 11908}], "download_size": 434360238, "dataset_size": 1171826005}} | 2023-11-22T10:03:09+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "LongAlpaca-12k-th"
Thai-translated version of URL
Translated by Google translate | [
"# Dataset Card for \"LongAlpaca-12k-th\"\n\nThai-translated version of URL\nTranslated by Google translate"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"LongAlpaca-12k-th\"\n\nThai-translated version of URL\nTranslated by Google translate"
]
| [
6,
31
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"LongAlpaca-12k-th\"\n\nThai-translated version of URL\nTranslated by Google translate"
]
|
7e807a8468f69f5e949ae5e6b27d59f0d49895e1 |
The EATD Corpus is hosted in [this GitHub repository](https://github.com/speechandlanguageprocessing/ICASSP2022-Depression).
Follow the instructions there to download and unzip the data.
This dataset can be used with the following code, changing the path of `data_dir` to the one appropriate to your system:
```python
from datasets import load_dataset

dataset = load_dataset('jimregan/eatd_corpus', data_dir='/tmp/EATD-Corpus/')
```
| jimregan/eatd_corpus | [
"task_categories:automatic-speech-recognition",
"task_categories:audio-classification",
"language:zh",
"license:other",
"region:us"
]
| 2023-11-17T12:24:04+00:00 | {"language": ["zh"], "license": "other", "task_categories": ["automatic-speech-recognition", "audio-classification"]} | 2023-11-17T12:32:03+00:00 | []
| [
"zh"
]
| TAGS
#task_categories-automatic-speech-recognition #task_categories-audio-classification #language-Chinese #license-other #region-us
|
The EATD Corpus is hosted in this github repository.
Follow the instructions there to download and unzip the data.
This dataset can be used with the following line of code, changing the path of 'data_dir' to the one appropriate to your system:
| []
| [
"TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #language-Chinese #license-other #region-us \n"
]
| [
44
]
| [
"passage: TAGS\n#task_categories-automatic-speech-recognition #task_categories-audio-classification #language-Chinese #license-other #region-us \n"
]
|
84f734cc928cf2e56cf3348cf0cfddbf65344840 | # Dataset Card for "medflex-test"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kosta-naumenko/medflex-test | [
"region:us"
]
| 2023-11-17T12:38:41+00:00 | {"dataset_info": {"features": [{"name": "tokens", "sequence": "string"}, {"name": "ner_tags", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 610623, "num_examples": 498}], "download_size": 103974, "dataset_size": 610623}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T12:38:43+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "medflex-test"
More Information needed | [
"# Dataset Card for \"medflex-test\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"medflex-test\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"medflex-test\"\n\nMore Information needed"
]
|
9896768988dea5f14d0852dee3a690b5787aba12 | # Dataset Card for "task_desc_gen"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | lissadesu/task_desc_gen | [
"region:us"
]
| 2023-11-17T13:04:36+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1958274, "num_examples": 979}], "download_size": 622077, "dataset_size": 1958274}} | 2023-11-17T13:06:20+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "task_desc_gen"
More Information needed | [
"# Dataset Card for \"task_desc_gen\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"task_desc_gen\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"task_desc_gen\"\n\nMore Information needed"
]
|
f24866d321fbd75fe25dab3c19fd3aeb405ac2e7 | # Dataset Card for "LLM_data3"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | GHOFRANEE/LLM_data3 | [
"region:us"
]
| 2023-11-17T13:59:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "context", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18164, "num_examples": 9}], "download_size": 19461, "dataset_size": 18164}} | 2023-11-17T20:03:24+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "LLM_data3"
More Information needed | [
"# Dataset Card for \"LLM_data3\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"LLM_data3\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"LLM_data3\"\n\nMore Information needed"
]
|
85427d688eb84a5ec506ba7c58eca5345b8008bc |

This dataset has been sourced only from open-source-licensed datasets such as FLAN. No OpenAI input.
"license:mit",
"region:us"
]
| 2023-11-17T14:36:58+00:00 | {"license": "mit"} | 2023-12-12T19:34:37+00:00 | []
| []
| TAGS
#license-mit #region-us
|
!image/png
This dataset has been sourced only from open-source-licensed datasets such as FLAN. No OpenAI input. | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
c3532b260f863b3adfa2455dba3725146ec6cd53 | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed] | vanesa1221/llama2-unsaac | [
"size_categories:n<1K",
"language:es",
"license:mit",
"region:us"
]
| 2023-11-17T14:43:27+00:00 | {"language": ["es"], "license": "mit", "size_categories": ["n<1K"]} | 2023-11-17T14:57:13+00:00 | []
| [
"es"
]
| TAGS
#size_categories-n<1K #language-Spanish #license-mit #region-us
| # Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License: | [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:"
]
| [
"TAGS\n#size_categories-n<1K #language-Spanish #license-mit #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:"
]
| [
26,
34,
4,
40
]
| [
"passage: TAGS\n#size_categories-n<1K #language-Spanish #license-mit #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:"
]
|
eb9c50806f3377d34c94d3e67df70e8fb4535018 | # Instructions dataset
This repository combines multiple instruction datasets into one JSONL file for training; notebooks at https://github.com/mesolitica/malaysian-dataset/tree/master/llm-instruction | mesolitica/instructions-dataset | [
"region:us"
]
| 2023-11-17T14:46:45+00:00 | {} | 2024-01-20T08:19:47+00:00 | []
| []
| TAGS
#region-us
| # Instructions dataset
This repository combines multiple instruction datasets into one JSONL file for training; notebooks at URL | [
"# Instructions dataset\n\nThis to combine multiple instruction dataset into one JSONL file to train, notebooks at URL"
]
| [
"TAGS\n#region-us \n",
"# Instructions dataset\n\nThis to combine multiple instruction dataset into one JSONL file to train, notebooks at URL"
]
| [
6,
27
]
| [
"passage: TAGS\n#region-us \n# Instructions dataset\n\nThis to combine multiple instruction dataset into one JSONL file to train, notebooks at URL"
]
|
83d1632aa63006a6bd7f4ce0ae6a8b1cf73db72b |
---
tags:
- gpt-4
- janet
---
# Description
10k user questions for a single class classification task.
All questions are related to a hypothetical game in which the user is a sniper
and has to answer questions from a commander. The questions in the dataset
are the ones that the user is expected to answer.
# Schema
```
{
"question": Ds.Value("string"),
"topic": Ds.ClassLabel(names=[
"about_to_shoot",
"ballistics",
"civilians",
"countersniping",
"dark_hair",
"description_age",
"description_body",
"description_crime",
"description_face",
"description_height",
"description_no_hair",
"description_pose",
"description_race",
"description_request",
"description_request_armed",
"description_request_behavior",
"description_request_clothing",
"description_request_companions",
"description_request_location",
"description_request_tattoo",
"description_request_transport",
"description_right_handed",
"description_sex",
"description_skin_color",
"description_weight",
"easter_egg_go_on_date",
"extraction",
"goodbye",
"hello",
"how_are_you",
"light_hair",
"permission_to_fire",
"request_change_location",
"returning_to_base",
"say_that_again",
"searching",
"secondary_targets",
"target_down",
"target_down_negative",
"target_identified",
"target_identified_maybe",
"target_identified_negative",
"target_name",
"thanks",
"thanks_and_goodbye",
"time_constraint",
"wearing_eyewear",
"wearing_eyewear_negative",
"what_to_do",
],),
}
```
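A hedged usage sketch follows; it assumes `Ds` above refers to the Hugging Face `datasets` library, that the published features match this schema, and that the default split is named `train` (none of which this card confirms):
```python
from datasets import load_dataset

# Hypothetical usage; the repo id comes from this card, the split name is an assumption.
ds = load_dataset("IconicAI/janet-textclassification-10k", split="train")

topic = ds.features["topic"]                     # the ClassLabel feature defined in the schema above
print(topic.names[:5])                           # first few topic names
example = ds[0]
print(example["question"], "->", topic.int2str(example["topic"]))
```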
# Citation
```
@misc{JanetTextClassification10k,
title = {JanetTextClassification10k: A Dataset of user questions for a single class classification task.},
author = {Kieran Donaldson and Piotr Trochim},
year = {2023},
publisher = {HuggingFace},
journal = {HuggingFace repository},
howpublished = {\\url{https://huggingface.co/datasets/IconicAI/janet-textclassification-10k}},
}
``` | IconicAI/janet-textclassification-10k | [
"task_categories:text-classification",
"region:us"
]
| 2023-11-17T15:12:52+00:00 | {"task_categories": ["text-classification"]} | 2023-12-12T13:27:51+00:00 | []
| []
| TAGS
#task_categories-text-classification #region-us
|
---
tags:
- gpt-4
- janet
---
# Description
10k user questions for a single class classification task.
All questions are related to a hypothetical game in which the user is a sniper
and has to answer questions from a commander. The questions in the dataset
are the ones that the user is expected to answer.
# Schema
| [
"# Description\n10k user questions for a single class classification task.\n\nAll questions are related to a hypothetical game in whichn the user is a sniper \nand has to answer questions from a commander. The questions in the dataset\nare the ones that the user is expected to answer.",
"# Schema"
]
| [
"TAGS\n#task_categories-text-classification #region-us \n",
"# Description\n10k user questions for a single class classification task.\n\nAll questions are related to a hypothetical game in whichn the user is a sniper \nand has to answer questions from a commander. The questions in the dataset\nare the ones that the user is expected to answer.",
"# Schema"
]
| [
17,
60,
3
]
| [
"passage: TAGS\n#task_categories-text-classification #region-us \n# Description\n10k user questions for a single class classification task.\n\nAll questions are related to a hypothetical game in whichn the user is a sniper \nand has to answer questions from a commander. The questions in the dataset\nare the ones that the user is expected to answer.# Schema"
]
|
70bba665e03d8b61fb79a73d6939fb45bf7530de | > Written with [StackEdit](https://stackedit.io/).
## [List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words_V2](https://github.com/LDNOOBWV2/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words_V2#list-of-dirty-naughty-obscene-and-otherwise-bad-words_v2)
This list of words is a follow-up and extension of the Shutterstock [List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words](https://github.com/LDNOOBW/List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words/tree/master), as that list is not maintained anymore. As there are many profanity word lists around on the web (and many not maintained), their content was grabbed and joined together here (see the source list below).
As opinions on which words belong in such lists vary across cultures, languages, and geographies, feel free to extend them to your needs; hopefully this will attract a lot of feedback.
The lists need reviews from native speakers. It would be great to collect more words and even get more languages (**75** right now, with over **50k words** altogether).
The long list of English words shows that people got very creative in getting around profanity filters. The best way to use these hard-coded word lists is to use them as an additional quality criterion for filtering texts, as is done in the [RedPajama](https://github.com/togethercomputer/RedPajama-Data) dataset, or to use them for building ML-based profanity filters.
The next TODO will be to create some regular expressions for often-used spelling variations.
### Structure and Format
- filename is the **iso-code** of the country
- file extension is **".txt"**
- **utf-8** encoded
- all words are **lowercase**
- one expression per line
- all words contained in the English "***en.txt***" file are **excluded** in the other language files
- for often-used words where the classification as a profane word is doubtful, there is a separate CSV file (see the loading sketch after this list)
- the csv-file is: [questionable_international_words.csv](questionable_international_words.csv)
- separator is the comma "**,**"
- **51** words for several languages (see table below)
- the header line contains the iso-code of the language, a classification column (*category*), and a *remark* column
- these words are **NOT** included in the language-text-files, e.g. "*.txt"
- when I couldn't find a translation, the field contains the string: **<NO_TRANSLATION>**
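A minimal loading sketch under the layout stated above (one lowercase expression per line in `<iso>.txt`, plus the comma-separated CSV of questionable words). The `data/` directory below matches the links in the table that follows; treat the exact paths as assumptions.
```python
import csv
from pathlib import Path

DATA_DIR = Path("data")  # assumption: the per-language files live under data/ as linked in the table

def load_wordlist(iso_code: str) -> set[str]:
    """Read one language file: utf-8, one lowercase expression per line."""
    path = DATA_DIR / f"{iso_code}.txt"
    return {line.strip() for line in path.read_text(encoding="utf-8").splitlines() if line.strip()}

def load_questionable(path: str = "questionable_international_words.csv") -> list[dict]:
    """Read the comma-separated CSV; the header holds iso codes plus 'category' and 'remark' columns."""
    with open(path, newline="", encoding="utf-8") as f:
        return list(csv.DictReader(f))

en_words = load_wordlist("en")
print(len(en_words))                     # roughly 13k English expressions
for row in load_questionable()[:3]:
    print(row["category"], row["remark"])
```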
### Languages Files Overview
language | count | filename | in csv-file | remark
--- | --- | --- | --- | ---
[Afrikaans](data/af.txt) | 256 | af | Y|
[Albanian](data/sq.txt) | 179 | sq | Y|
[Algerian](data/dz.txt) | 86 | dz | N|
[Amharic](data/am.txt) | 50 | am | N|
[Arabic](data/ar.txt) |1248 | ar | N|
[Armenian](data/hy.txt) | 106 | hy | Y|
[Australian Kriol](data/rop.txt) | 16 | rop| N|
[Azerbaijanian](data/az.txt) | 37 | az | N|
[Basque](data/eu.txt) | 48 | eu | N|
[Belorussian](data/be.txt) | 118 | be | N|
[Bulgarian](data/bg.txt) | 336 | bg | Y|
[Burmese](data/my.txt) | 81 | my | N|
[Cambodian](data/kh.txt) | 132 | kh | N|
[Catalan](data/ca.txt) | 143 | ca | Y|
[Cebuano](data/ceb.txt) | 18 | ceb| N|
[Chinese](data/zh.txt) |1811 | zh | Y|
[Croatian](data/hr.txt) | 267 | hr | Y|
[Czech](data/cs.txt) | 241 | cs | Y|
[Danish](data/da.txt) | 185 | da | Y|
[Dutch](data/nl.txt) |1224 | nl | Y|
[English](data/en.txt) |12996| en | Y| various spelling variations, does not contain Spanish (es) words
[Esperanto](data/eo.txt) | 50 | eo | N|
[Estonian](data/et.txt) | 174 | et | Y|
[Filipino](data/fil.txt) | 165 | fil| Y|
[Finnish](data/fi.txt) | 317 | fi | Y|
[French](data/fr.txt) |3708 | fr | Y| many spelling variations
[Gaelic](data/gd.txt) | 88 | gd | N|
[Galician](data/gl.txt) | 74 | gl | N|
[German](data/de.txt) | 622 | de | Y|
[Greek](data/el.txt) | 247 | el | Y|
[Hebrew](data/yid.txt) | 173 | yid| N|
[Hindi](data/hi.txt) | 771 | hi | Y|
[Hungarian](data/hu.txt) | 296 | hu | Y|
[Icelandic](data/is.txt) | 137 | is | Y|
[Italian](data/it.txt) |1755 | it | Y|
[Indonesian](data/id.txt) | 582 | id | Y|
[Japanese](data/ja.txt) | 468 | ja | Y|
[Kabyle](data/kab.txt) | 22 | kab| N|
[Klingon](data/tlh.txt) | 33 | tlh| N|
[Korean](data/ko.txt) |3094 | ko | Y|
[Latin](data/la.txt) | 103 | la | N|
[Latvian](data/lv.txt) | 198 | lv | Y|
[Lithuanian](data/lt.txt) | 158 | lt | Y|
[Macedonian](data/mk.txt) | 192 | mk | N|
[Malay](data/ms.txt) | 201 | ms | Y|
[Malayalam](data/ml.txt) | 338 | ml | Y|
[Maltese](data/mt.txt) | 132 | mt | Y|
[Maori](data/mi.txt) | 58 | mi | Y|
[Marathi](data/mr.txt) | 95 | mr | Y|
[Mongolian](data/mn.txt) | 277 | mn | N|
[Norwegian](data/no.txt) | 171 | no | Y|
[Persian](data/fa.txt) | 629 | fa | N|
[Pitcairn-Norfolk](data/pih.txt) | 14 | pih| N|
[Piya-Kwonci](data/piy.txt) | 13 | piy| N|
[Polish](data/pl.txt) |8971 | pl | Y| different grammatical variations
[Portuguese](data/pt.txt) | 575 | pt | Y| including Brasilian
[Romanian](data/ro.txt) | 290 | ro | Y|
[Russian](data/ru.txt) |4948 | ru | Y|
[Samoan](data/sm.txt) | 116 | sm | Y|
[Serbian](data/sr.txt) | 459 | sr | Y| sr_k & sr_l in csv file
[Slovak](data/sk.txt) | 586 | sk | Y|
[Slovene](data/sl.txt) | 167 | sl | Y|
[Spanish](data/es.txt) |1677 | es | Y| including Middle- and South American
[Swedish](data/sv.txt) | 245 | sv | Y|
[Tamil](data/ta.txt) | 119 | ta | N|
[Telugu](data/te.txt) | 317 | te | Y|
[Tetum](data/tet.txt) | 11 | tet| N|
[Thai](data/th.txt) |1715 | th | Y|
[Tongan](data/to.txt) | 68 | to | N|
[Turkish](data/tr.txt) | 370 | tr | Y|
[Ukrainian](data/uk.txt) | 205 | uk | Y|
[Uzbek](data/uz.txt) | 102 | uz | N|
[Vietnamese](data/vi.txt) | 790 | vi | Y|
[Welsh](data/cy.txt) | 169 | cy | Y|
[Zulu](data/zu.txt) | 115 | zu | N|
### Categories in *questionable_international_words.csv*
The categories used are:
- **cul**: cultural differences
- **dm**: drugs & medicine
- **his**: historical
- **leg**: Legislative term
- **mab**: medical, anatomic, biological term
- **pol**: political
- **rel**: religious
- **so**: sexual orientation
- **vm**: various meanings
This is just an ad hoc classification where several expressions can be in different categories. | PeterGraebner/LDNOOBW_V2 | [
"size_categories:10K<n<100K",
"language:af",
"language:az",
"language:am",
"language:be",
"language:bg",
"language:dz",
"language:eu",
"language:my",
"language:ca",
"language:cs",
"language:cy",
"language:hr",
"language:zh",
"language:da",
"language:de",
"language:nl",
"language:el",
"language:en",
"language:eo",
"language:es",
"language:et",
"language:fa",
"language:fi",
"language:fr",
"language:gl",
"language:gd",
"language:hi",
"language:hy",
"language:hu",
"language:id",
"language:is",
"language:it",
"language:ja",
"language:ko",
"language:la",
"language:lt",
"language:lv",
"language:mi",
"language:mk",
"language:ml",
"language:ms",
"language:mt",
"language:mr",
"language:mn",
"language:no",
"language:pl",
"language:pt",
"language:ro",
"language:ru",
"language:sk",
"language:sl",
"language:sm",
"language:sq",
"language:te",
"language:ta",
"language:to",
"language:tr",
"language:uk",
"language:uz",
"language:vi",
"language:yid",
"language:zu",
"license:cc0-1.0",
"region:us"
]
| 2023-11-17T16:06:07+00:00 | {"language": ["af", "az", "am", "be", "bg", "dz", "eu", "my", "ca", "cs", "cy", "hr", "zh", "da", "de", "nl", "el", "en", "eo", "es", "et", "fa", "fi", "fr", "gl", "gd", "hi", "hy", "hu", "id", "is", "it", "ja", "ko", "la", "lt", "lv", "mi", "mk", "ml", "ms", "mt", "mr", "mn", "no", "pl", "pt", "ro", "ru", "sk", "sl", "sm", "sq", "te", "ta", "to", "tr", "uk", "uz", "vi", "yid", "zu"], "license": "cc0-1.0", "size_categories": ["10K<n<100K"], "pretty_name": "List of Dirty Naughty Obscene and Otherwise Bad Words V2"} | 2023-11-24T15:00:57+00:00 | []
| [
"af",
"az",
"am",
"be",
"bg",
"dz",
"eu",
"my",
"ca",
"cs",
"cy",
"hr",
"zh",
"da",
"de",
"nl",
"el",
"en",
"eo",
"es",
"et",
"fa",
"fi",
"fr",
"gl",
"gd",
"hi",
"hy",
"hu",
"id",
"is",
"it",
"ja",
"ko",
"la",
"lt",
"lv",
"mi",
"mk",
"ml",
"ms",
"mt",
"mr",
"mn",
"no",
"pl",
"pt",
"ro",
"ru",
"sk",
"sl",
"sm",
"sq",
"te",
"ta",
"to",
"tr",
"uk",
"uz",
"vi",
"yid",
"zu"
]
| TAGS
#size_categories-10K<n<100K #language-Afrikaans #language-Azerbaijani #language-Amharic #language-Belarusian #language-Bulgarian #language-Dzongkha #language-Basque #language-Burmese #language-Catalan #language-Czech #language-Welsh #language-Croatian #language-Chinese #language-Danish #language-German #language-Dutch #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Persian #language-Finnish #language-French #language-Galician #language-Scottish Gaelic #language-Hindi #language-Armenian #language-Hungarian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Korean #language-Latin #language-Lithuanian #language-Latvian #language-Maori #language-Macedonian #language-Malayalam #language-Malay (macrolanguage) #language-Maltese #language-Marathi #language-Mongolian #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Samoan #language-Albanian #language-Telugu #language-Tamil #language-Tonga (Tonga Islands) #language-Turkish #language-Ukrainian #language-Uzbek #language-Vietnamese #language-Yiddish #language-Zulu #license-cc0-1.0 #region-us
|
>
> Written with StackEdit.
>
>
> List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words\_V2
> ---------------------------------------------------------
>
>
>
This list of words is a follow-up and extension of the Shutterstock List-of-Dirty-Naughty-Obscene-and-Otherwise-Bad-Words as that list is not maintained anymore. As there are many profanity word lists around on the web (and many not maintained) their content was crabbed and joined here together (see the source list below).
As the opinion on which words should be in such lists varies between culture, language, and geographies, feel free to extend them to your needs, hopefully getting a lot of feedback.
The lists need reviews from native speakers. It would be great to collect more words and even get more languages (75 right now, with over 50k words alltogether).
The long list of English words shows that people got very creative to get around profanity filters. The best way to use these hard-coded word lists is to use them as an additional quality criterion for filtering texts like it is done in RedPajama data set or use them for ML building profanity filters.
The next TODO will be to create some regular expressions for often-used spelling variations.
### Structure and Format
* filename is the iso-code of the country
* file extension is ".txt"
* utf-8 encoded
* all words are lowercase
* one expression per line
* all words contained in the English "*URL*" file are excluded in the other language files
* often used words where the classification as a profane word is doubtful, there is a separate csv file
* the csv-file is: questionable\_international\_words.csv
* separator is the comma ","
* 51 words for several languages (see table below)
* the header line contains the iso-code of the language, a classification column (*category*), and a *remark* column
* these words are NOT included in the language-text-files, e.g. "\*.txt"
* when I couldn't find a translation, the field contains the string: <NO\_TRANSLATION>
### Languages Files Overview
### Categories in *questionable\_international\_words.csv*
The categories used are:
* cul: cultural differences
* dm: drugs & medicine
* his: historical
* leg: Legislative term
* mab: medical, anatomic, biological term
* pol: political
* rel: religious
* so: sexual orientation
* vm: various meanings
This is just an ad hoc classification where several expressions can be in different categories.
| [
"### Structure and Format\n\n\n* filename is the iso-code of the country\n* file extension is \".txt\"\n* utf-8 encoded\n* all words are lowercase\n* one expression per line\n* all words contained in the English \"*URL*\" file are excluded in the other language files\n* often used words where the classification as a profane word is doubtful, there is a separate csv file\n* the csv-file is: questionable\\_international\\_words.csv\n* separator is the comma \",\"\n* 51 words for several languages (see table below)\n* the header line contains the iso-code of the language, a classification column (*category*), and a *remark* column\n* these words are NOT included in the language-text-files, e.g. \"\\*.txt\"\n* when I couldn't find a translation, the field contains the string: <NO\\_TRANSLATION>",
"### Languages Files Overview",
"### Categories in *questionable\\_international\\_words.csv*\n\n\nThe categories used are:\n\n\n* cul: cultural differences\n* dm: drugs & medicine\n* his: historical\n* leg: Legislative term\n* mab: medical, anatomic, biological term\n* pol: political\n* rel: religious\n* so: sexual orientation\n* vm: various meanings\n\n\nThis is just an ad hoc classification where several expressions can be in different categories."
]
| [
"TAGS\n#size_categories-10K<n<100K #language-Afrikaans #language-Azerbaijani #language-Amharic #language-Belarusian #language-Bulgarian #language-Dzongkha #language-Basque #language-Burmese #language-Catalan #language-Czech #language-Welsh #language-Croatian #language-Chinese #language-Danish #language-German #language-Dutch #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Persian #language-Finnish #language-French #language-Galician #language-Scottish Gaelic #language-Hindi #language-Armenian #language-Hungarian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Korean #language-Latin #language-Lithuanian #language-Latvian #language-Maori #language-Macedonian #language-Malayalam #language-Malay (macrolanguage) #language-Maltese #language-Marathi #language-Mongolian #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Samoan #language-Albanian #language-Telugu #language-Tamil #language-Tonga (Tonga Islands) #language-Turkish #language-Ukrainian #language-Uzbek #language-Vietnamese #language-Yiddish #language-Zulu #license-cc0-1.0 #region-us \n",
"### Structure and Format\n\n\n* filename is the iso-code of the country\n* file extension is \".txt\"\n* utf-8 encoded\n* all words are lowercase\n* one expression per line\n* all words contained in the English \"*URL*\" file are excluded in the other language files\n* often used words where the classification as a profane word is doubtful, there is a separate csv file\n* the csv-file is: questionable\\_international\\_words.csv\n* separator is the comma \",\"\n* 51 words for several languages (see table below)\n* the header line contains the iso-code of the language, a classification column (*category*), and a *remark* column\n* these words are NOT included in the language-text-files, e.g. \"\\*.txt\"\n* when I couldn't find a translation, the field contains the string: <NO\\_TRANSLATION>",
"### Languages Files Overview",
"### Categories in *questionable\\_international\\_words.csv*\n\n\nThe categories used are:\n\n\n* cul: cultural differences\n* dm: drugs & medicine\n* his: historical\n* leg: Legislative term\n* mab: medical, anatomic, biological term\n* pol: political\n* rel: religious\n* so: sexual orientation\n* vm: various meanings\n\n\nThis is just an ad hoc classification where several expressions can be in different categories."
]
| [
382,
218,
8,
100
]
| [
"passage: TAGS\n#size_categories-10K<n<100K #language-Afrikaans #language-Azerbaijani #language-Amharic #language-Belarusian #language-Bulgarian #language-Dzongkha #language-Basque #language-Burmese #language-Catalan #language-Czech #language-Welsh #language-Croatian #language-Chinese #language-Danish #language-German #language-Dutch #language-Modern Greek (1453-) #language-English #language-Esperanto #language-Spanish #language-Estonian #language-Persian #language-Finnish #language-French #language-Galician #language-Scottish Gaelic #language-Hindi #language-Armenian #language-Hungarian #language-Indonesian #language-Icelandic #language-Italian #language-Japanese #language-Korean #language-Latin #language-Lithuanian #language-Latvian #language-Maori #language-Macedonian #language-Malayalam #language-Malay (macrolanguage) #language-Maltese #language-Marathi #language-Mongolian #language-Norwegian #language-Polish #language-Portuguese #language-Romanian #language-Russian #language-Slovak #language-Slovenian #language-Samoan #language-Albanian #language-Telugu #language-Tamil #language-Tonga (Tonga Islands) #language-Turkish #language-Ukrainian #language-Uzbek #language-Vietnamese #language-Yiddish #language-Zulu #license-cc0-1.0 #region-us \n"
]
|
fd4dad81de0de3acd7ae18d981ab238cf16e01f1 | CORPUS 1A: A corpus of President Clinton's terrorism-related discourse, for a historical research project.
Period: January 20, 1997 (Clinton’s second term inauguration day) – January 20, 2001 (the day Clinton left office).
Search parameters: All documents on the American Presidency Project site within the above timeframe returned through a keyword search ‘terror*’
using the wildcard star to return all variants (terrorism, terrorist, etc.). Results were further refined to only those associated with
William J. Clinton’s name, so as to filter out Bush’s campaign speeches, and then narrowed further to only those documents tagged as
‘spoken addresses or remarks’ (to filter out noise from some policy papers and bureaucratic writs and documents,
focusing instead on the core, the heart of executive discourse, straight from his speechwriters in the West Wing).
This was also done for the sake of a coherent voice in the fine-tuning data.
Composition: a total of 388 records were returned (including a range of spoken remarks, from State of the Union addresses,
to remarks to various communities all around the US, to remarks to the press, remarks following discussions with foreign dignitaries, etc).
Here’s the link to these search results: https://www.presidency.ucsb.edu/advanced-search?field-keywords=terror%2A&field-keywords2=&field-keywords3=&from%5Bdate%5D=01-20-1997&to%5Bdate%5D=01-20-2001&person2=&items_per_page=100&f%5B0%5D=field_docs_person%3A200298&f%5B1%5D=field_docs_category%3A8
Word count pre-cleaning: 955,714
Word count post-cleaning: 958,876 (the slight increase is likely due to white-space normalisation as part of the cleaning process).
"region:us"
]
| 2023-11-17T16:16:10+00:00 | {} | 2023-12-18T15:30:18+00:00 | []
| []
| TAGS
#region-us
| CORPUS 1A: A corpus of President Clinton's terrorism-related discourse, for a historical research project.
Period: January 20, 1997 (Clinton’s second term inauguration day) – January 20, 2001 (the day Clinton left office).
Search parameters: All documents on the American Presidency Project site within the above timeframe returned through a keyword search ‘terror*’
using the wildcard star to return all variants, terrorism, terrorist, etc. Results were further refined to only those associated with
William J. Clinton’s name, so as to filter out Bush’s campaign speeches. Further refined by only those documents tagged as
‘spoken addresses or remarks,’ (to filter out noise from some policy papers, and bureaucratic writs and documents,
focusing instead on the core, the heart of executive discourse, straight from his speechwriters at the heart of the west-wing).
Also, for the sake of a coherent voice in the fine-tuning data.
Composition: a total of 388 records were returned. (including a range of spoken remarks from state of the union addresses,
to remarks to various communities all around the US, to remarks to the press, remarks following discussions with foreign dignitaries, etc).
Here’s the link to these search results: URL
Word count pre-cleaning: 955,714
Word count post-cleaning: 958,876 (likely due to white-space normalisation as part of the cleaning process). | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
6e07fb73d756f2505ebafa854ce023e328a808f1 | # Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | vanesa1221/admision-unsaac | [
"task_categories:question-answering",
"size_categories:n<1K",
"language:es",
"region:us"
]
| 2023-11-17T16:34:43+00:00 | {"language": ["es"], "size_categories": ["n<1K"], "task_categories": ["question-answering"]} | 2023-11-17T16:50:34+00:00 | []
| [
"es"
]
| TAGS
#task_categories-question-answering #size_categories-n<1K #language-Spanish #region-us
| # Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
"TAGS\n#task_categories-question-answering #size_categories-n<1K #language-Spanish #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
]
| [
33,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
]
| [
"passage: TAGS\n#task_categories-question-answering #size_categories-n<1K #language-Spanish #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
]
|
24e77f0b8f16fa45b63b0f5814c0918fce97567f | # NB Samtale — Conversations
This dataset contains extracted and cleaned conversations from the [NB Samtale corpus](https://www.nb.no/sprakbanken/en/resource-catalogue/oai-nb-no-sbr-85/). The original is a speech corpus made by the Language Bank at the National Library of Norway. The corpus contains orthographically transcribed speech from podcasts and recordings of live events.
| ltg/nb-samtale-conversations | [
"task_categories:conversational",
"size_categories:1K<n<10K",
"language:no",
"language:nb",
"language:nn",
"license:cc0-1.0",
"region:us"
]
| 2023-11-17T16:45:46+00:00 | {"language": ["no", "nb", "nn"], "license": "cc0-1.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational"], "pretty_name": "NB Samtale \u2014 Conversations"} | 2023-11-18T03:17:41+00:00 | []
| [
"no",
"nb",
"nn"
]
| TAGS
#task_categories-conversational #size_categories-1K<n<10K #language-Norwegian #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc0-1.0 #region-us
| # NB Samtale — Conversations
This dataset contains extracted and cleaned conversations from the NB Samtale corpus. The original is a speech corpus made by the Language Bank at the National Library of Norway. The corpus contains orthographically transcribed speech from podcasts and recordings of live events.
| [
"# NB Samtale — Conversations\n\nThis dataset contains extracted and cleaned conversations from the NB Samtale corpus. The original is a speech corpus made by the Language Bank at the National Library of Norway. The corpus contains orthographically transcribed speech from podcasts and recordings of live events."
]
| [
"TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-Norwegian #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc0-1.0 #region-us \n",
"# NB Samtale — Conversations\n\nThis dataset contains extracted and cleaned conversations from the NB Samtale corpus. The original is a speech corpus made by the Language Bank at the National Library of Norway. The corpus contains orthographically transcribed speech from podcasts and recordings of live events."
]
| [
58,
67
]
| [
"passage: TAGS\n#task_categories-conversational #size_categories-1K<n<10K #language-Norwegian #language-Norwegian Bokmål #language-Norwegian Nynorsk #license-cc0-1.0 #region-us \n# NB Samtale — Conversations\n\nThis dataset contains extracted and cleaned conversations from the NB Samtale corpus. The original is a speech corpus made by the Language Bank at the National Library of Norway. The corpus contains orthographically transcribed speech from podcasts and recordings of live events."
]
|
dabec6f05dbf79a53e09ac66df5f20891ac5fc29 | This custom pippa dataset is from Undi of TheBloke Discord | Redwood0/pippa_custom | [
"task_categories:conversational",
"language:en",
"license:apache-2.0",
"not-for-all-audiences",
"conversational",
"roleplay",
"custom-format",
"region:us"
]
| 2023-11-17T17:24:28+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["conversational"], "tags": ["not-for-all-audiences", "conversational", "roleplay", "custom-format"], "viewer": true} | 2023-12-08T21:03:15+00:00 | []
| [
"en"
]
| TAGS
#task_categories-conversational #language-English #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #custom-format #region-us
| This custom pippa dataset is from Undi of TheBloke Discord | []
| [
"TAGS\n#task_categories-conversational #language-English #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #custom-format #region-us \n"
]
| [
50
]
| [
"passage: TAGS\n#task_categories-conversational #language-English #license-apache-2.0 #not-for-all-audiences #conversational #roleplay #custom-format #region-us \n"
]
|
8af7d5fb466b9ffec4416428e4c1adb6ba415b32 | # Dataset Card for "oxfordpets_caption"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | cherry0324/oxfordpets_caption | [
"region:us"
]
| 2023-11-17T17:59:58+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 325420058.632, "num_examples": 3693}], "download_size": 393978599, "dataset_size": 325420058.632}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T18:06:41+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "oxfordpets_caption"
More Information needed | [
"# Dataset Card for \"oxfordpets_caption\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"oxfordpets_caption\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"oxfordpets_caption\"\n\nMore Information needed"
]
|
11dc43a7414d21b5e613241dc1e412f796f206d3 | # Dataset Card for "data_deduplicated_part02"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_deduplicated_part02 | [
"region:us"
]
| 2023-11-17T18:42:23+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5620696040, "num_examples": 134992416}], "download_size": 2966557230, "dataset_size": 5620696040}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T18:44:54+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_deduplicated_part02"
More Information needed | [
"# Dataset Card for \"data_deduplicated_part02\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_deduplicated_part02\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_deduplicated_part02\"\n\nMore Information needed"
]
|
044b0f95aaa8ccb17df8965701cdb9af64f8bdc1 | This dataset contains question/answer pairs from a French legal protection insurance contract (https://www.service-public.fr/particuliers/vosdroits/F3049?lang=en).
The objective of this dataset is to contribute to open source research projects aiming to, for instance:
* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.
Other datasets of the same kind are also available - or will be available soon - and are part of this research effort. See here: https://huggingface.co/collections/zelros/legal-protection-insurance-6536e8f389dd48faca78447e
Here is an example of usages of this dataset: https://huggingface.co/spaces/zelros/The-legal-protection-insurance-comparator | zelros/pj-maif | [
"insurance",
"region:us"
]
| 2023-11-17T19:16:15+00:00 | {"tags": ["insurance"]} | 2023-11-17T20:49:14+00:00 | []
| []
| TAGS
#insurance #region-us
| This dataset contains question/answer pairs from a French legal protection insurance (URL
The objective of this dataset is to contribute to open source research projects aiming to, for instance:
* fine-tune LLMs on high-quality datasets, specializing them in the insurance domain
* develop new question/answer applications using Retrieval Augmented Generation (RAG) for insurance contracts
* assess the knowledge of language models in the insurance field
* more generally, apply LLMs to the insurance domain for better understanding and increased transparency of this industry.
Other datasets of the same kind are also available - or will be available soon - and are part of this research effort. See here: URL
Here is an example of usages of this dataset: URL | []
| [
"TAGS\n#insurance #region-us \n"
]
| [
9
]
| [
"passage: TAGS\n#insurance #region-us \n"
]
|
baf20184f46001846f8d6be55e5b37ce28ad776f | # Dataset Card for "diffusiondb-prompt-upscale"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | adams-story/diffusiondb-prompt-upscale | [
"region:us"
]
| 2023-11-17T19:27:18+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "prompt_upscaled", "dtype": "string"}, {"name": "prompt_qa_raw_output", "dtype": "string"}, {"name": "prompt_qa_questions", "sequence": "string"}, {"name": "prompt_qa_answers", "sequence": "string"}, {"name": "prompt_qa_plan", "dtype": "string"}, {"name": "prompt_qa_options", "sequence": "string"}], "splits": [{"name": "train", "num_bytes": 91622094.93073891, "num_examples": 27372}, {"name": "test", "num_bytes": 927200.0692610944, "num_examples": 277}], "download_size": 47134525, "dataset_size": 92549295.0}} | 2023-11-17T19:27:53+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "diffusiondb-prompt-upscale"
More Information needed | [
"# Dataset Card for \"diffusiondb-prompt-upscale\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"diffusiondb-prompt-upscale\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"diffusiondb-prompt-upscale\"\n\nMore Information needed"
]
|
f14f667a1cbb4bfbff5f9ac7070e84184f64ce0f | # Dataset Card for "833-icons-dataset-1024-blip-large"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kopyl/833-icons-dataset-1024-blip-large | [
"region:us"
]
| 2023-11-17T19:34:14+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 21063249.0, "num_examples": 833}], "download_size": 19766635, "dataset_size": 21063249.0}} | 2023-11-17T19:35:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "833-icons-dataset-1024-blip-large"
More Information needed | [
"# Dataset Card for \"833-icons-dataset-1024-blip-large\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"833-icons-dataset-1024-blip-large\"\n\nMore Information needed"
]
| [
6,
25
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"833-icons-dataset-1024-blip-large\"\n\nMore Information needed"
]
|
4c036db5c87274c4ac9c447633e462a00c4afd35 | # Dataset Card for "validation_2000_cutoff_llama-2-7b-tyellow-2k-cutoff-LR1-clean-train_2K_results_backup"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Rewcifer/validation_2000_cutoff_llama-2-7b-tyellow-2k-cutoff-LR1-clean-train_2K_results_backup | [
"region:us"
]
| 2023-11-17T20:52:34+00:00 | {"dataset_info": {"features": [{"name": "labels_and_findings", "dtype": "string"}, {"name": "prompts", "dtype": "string"}, {"name": "true_findings", "dtype": "string"}, {"name": "generated_texts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 17497952, "num_examples": 2000}], "download_size": 4270083, "dataset_size": 17497952}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T20:52:36+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "validation_2000_cutoff_llama-2-7b-tyellow-2k-cutoff-LR1-clean-train_2K_results_backup"
More Information needed | [
"# Dataset Card for \"validation_2000_cutoff_llama-2-7b-tyellow-2k-cutoff-LR1-clean-train_2K_results_backup\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"validation_2000_cutoff_llama-2-7b-tyellow-2k-cutoff-LR1-clean-train_2K_results_backup\"\n\nMore Information needed"
]
| [
6,
51
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"validation_2000_cutoff_llama-2-7b-tyellow-2k-cutoff-LR1-clean-train_2K_results_backup\"\n\nMore Information needed"
]
|
cfc53e9c147b05edd6468cd4fbda166849a5cbd4 | # Open LLM Leaderboard Results PR Opener's Dataset
🏆 This is the dataset of [Open LLM Leaderboard Results PR Opener](https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr).
📊 This dataset contains a CSV file, which in turn includes the discussion link for the related repository.
## Tool Link
🔗 [Weyaxi/open-llm-leaderboard-results-pr](https://huggingface.co/spaces/Weyaxi/open-llm-leaderboard-results-pr) | Weyaxi/open-llm-leaderboard-results-pr | [
"size_categories:1K<n<10K",
"region:us"
]
| 2023-11-17T21:19:56+00:00 | {"size_categories": ["1K<n<10K"]} | 2023-11-20T12:27:46+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #region-us
| # Open LLM Leaderboard Results PR Opener's Dataset
This is the dataset of Open LLM Leaderboard Results PR Opener.
This dataset contains a CSV file, which in turn includes the discussion link for the related repository.
## Tool Link
Weyaxi/open-llm-leaderboard-results-pr | [
"# Open LLM Leaderboard Results PR Opener's Dataset\n\n This is the dataset of Open LLM Leaderboard Results PR Opener. \n\n This dataset contains a CSV file, which in turn includes the discussion link for the related repository.",
"## Tool Link\n\n Weyaxi/open-llm-leaderboard-results-pr"
]
| [
"TAGS\n#size_categories-1K<n<10K #region-us \n",
"# Open LLM Leaderboard Results PR Opener's Dataset\n\n This is the dataset of Open LLM Leaderboard Results PR Opener. \n\n This dataset contains a CSV file, which in turn includes the discussion link for the related repository.",
"## Tool Link\n\n Weyaxi/open-llm-leaderboard-results-pr"
]
| [
18,
54,
21
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #region-us \n# Open LLM Leaderboard Results PR Opener's Dataset\n\n This is the dataset of Open LLM Leaderboard Results PR Opener. \n\n This dataset contains a CSV file, which in turn includes the discussion link for the related repository.## Tool Link\n\n Weyaxi/open-llm-leaderboard-results-pr"
]
|
927268273025883b167deb4311740df8cf22c6d5 | # Dataset Card for "vicuna_qa_causal_LM_split"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | brettbbb/vicuna_qa_causal_LM_split | [
"region:us"
]
| 2023-11-17T21:37:14+00:00 | {"dataset_info": {"features": [{"name": "question", "dtype": "string"}, {"name": "mc1_targets", "struct": [{"name": "choices", "sequence": "string"}, {"name": "labels", "sequence": "int32"}]}, {"name": "mc2_targets", "struct": [{"name": "choices", "sequence": "string"}, {"name": "labels", "sequence": "int32"}]}], "splits": [{"name": "train", "num_bytes": 486818.29375764995, "num_examples": 653}, {"name": "test", "num_bytes": 122263.70624235006, "num_examples": 164}], "download_size": 280226, "dataset_size": 609082.0}} | 2023-11-17T21:42:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "vicuna_qa_causal_LM_split"
More Information needed | [
"# Dataset Card for \"vicuna_qa_causal_LM_split\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"vicuna_qa_causal_LM_split\"\n\nMore Information needed"
]
| [
6,
23
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"vicuna_qa_causal_LM_split\"\n\nMore Information needed"
]
|
3753ca69dc7b58495bf51be03286bc339cb5b2e3 | # Dataset Card for "chemnlp_iupac_smiles"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | kjappelbaum/chemnlp_iupac_smiles | [
"region:us"
]
| 2023-11-17T21:48:23+00:00 | {"dataset_info": {"features": [{"name": "SMILES", "dtype": "string"}, {"name": "Allowed", "dtype": "string"}, {"name": "CAS_like_Style", "dtype": "string"}, {"name": "Markup", "dtype": "string"}, {"name": "Preferred", "dtype": "string"}, {"name": "Systematic", "dtype": "string"}, {"name": "Traditional", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22485225644, "num_examples": 34351330}], "download_size": 6770692514, "dataset_size": 22485225644}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T21:54:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "chemnlp_iupac_smiles"
More Information needed | [
"# Dataset Card for \"chemnlp_iupac_smiles\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"chemnlp_iupac_smiles\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"chemnlp_iupac_smiles\"\n\nMore Information needed"
]
|
c4840dd5c3e856fd510dafac29299c1059999a42 | # Dataset Card for "DiffusionJockey"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | punwaiw/DiffusionJockey | [
"region:us"
]
| 2023-11-17T21:55:52+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13316574561.25, "num_examples": 17110}], "download_size": 13312875795, "dataset_size": 13316574561.25}} | 2023-12-04T01:15:04+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "DiffusionJockey"
More Information needed | [
"# Dataset Card for \"DiffusionJockey\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"DiffusionJockey\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"DiffusionJockey\"\n\nMore Information needed"
]
|
b56eb72898104e84da9515e15850adccaca7b260 | # Dataset Card for "bookcorpusopen_with_ids_chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | simple-pretraining/bookcorpusopen_with_ids_chunked | [
"region:us"
]
| 2023-11-17T22:12:55+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "chunk_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 7288147697, "num_examples": 35859587}], "download_size": 4331524813, "dataset_size": 7288147697}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-17T22:15:26+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "bookcorpusopen_with_ids_chunked"
More Information needed | [
"# Dataset Card for \"bookcorpusopen_with_ids_chunked\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"bookcorpusopen_with_ids_chunked\"\n\nMore Information needed"
]
| [
6,
22
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"bookcorpusopen_with_ids_chunked\"\n\nMore Information needed"
]
|
f99664b345bcea9cda87685f5bc21f098264e694 |
# Dataset Card for Hogans Heroes TV Caps
<!-- Provide a quick summary of the dataset. -->
This dataset consists mainly of screencaps from Hogan's Heroes. This card has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
These are curated screencaps of episodes from the 1960s tv show HOGANS HEROES.
Hand picked content from online sources, and capped using VLC's scene filter.
- **Curated by:** [https://capsekai.tumblr.com/]
## Uses
Research around text classification and preservation of old media.
### Direct Use
Study of character basis, research around the artistic nature of the episode's set design.
### Out-of-Scope Use
Going against local laws and regulations, onselling the dataset.
## Dataset Creation
### Curation Rationale
Preservation of old media.
### Source Data
Youtube & Dvd Sources
#### Data Collection and Processing
Collection: Unfiltered DVD / Youtube Caps.
#### Personal and Sensitive Information
There should be 0 personal info in here.
## Bias, Risks, and Limitations
????? - OH! Bias/Risks: Warning that this is a show that is largely based around World War 2. Like Dad's Army, this could contain sensitive topics and images.
The jokes set within the Recommendations are just that: we feel this TV show and the caps within are fairly safe, but it IS understandable if people have trigger issues with WW2.
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Risks? Copyright, DMCA, blinding adoration towards Bob Crane? UWU KLINK?
Social bias: WW2 media.
## Dataset Card Authors
[https://capsekai.tumblr.com/]
## Dataset Card Contact
[https://capsekai.tumblr.com/] | Capsekai/hogans-heroes | [
"task_categories:text-classification",
"size_categories:1K<n<10K",
"language:en",
"license:creativeml-openrail-m",
"art",
"region:us"
]
| 2023-11-17T22:14:57+00:00 | {"language": ["en"], "license": "creativeml-openrail-m", "size_categories": ["1K<n<10K"], "task_categories": ["text-classification"], "pretty_name": "Hogan's Heroes by Capsekai", "tags": ["art"]} | 2023-11-17T22:34:11+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-creativeml-openrail-m #art #region-us
|
# Dataset Card for Hogans Heroes TV Caps
This dataset is generally caps from Hogans Heroes. It has been generated using this raw template.
## Dataset Details
### Dataset Description
These are curated screencaps of episodes from the 1960s tv show HOGANS HEROES.
Hand picked content from online sources, and capped using VLC's scene filter.
- Curated by: [URL
## Uses
Research around text classification and preservation of old media.
### Direct Use
Study of character basis, research around the artistic nature of the episode's set design.
### Out-of-Scope Use
Going against local laws and regulations, onselling the dataset.
## Dataset Creation
### Curation Rationale
Preservation of old media.
### Source Data
Youtube & Dvd Sources
#### Data Collection and Processing
Collection: Unfiltered DVD / Youtube Caps.
#### Personal and Sensitive Information
There should be 0 personal info in here.
## Bias, Risks, and Limitations
????? - OH! Bias/Risks: Warning that this is a show that is largely based around World War 2. Like Dad's army this could contain sensitive topics and images.
The jokes set within the Reccomendations are just that we feel this TV show and the caps within are fairly safe, but it IS understandable if people largely have trigger issues with WW2.
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
Risks? Copyright, DMCA, blinding adoration towards Bob Crane? UWU KLINK?
Soical bias: WW2 media.
## Dataset Card Authors
[URL
## Dataset Card Contact
[URL | [
"# Dataset Card for Hogans Heroes TV Caps\n\n\n\nThis dataset is generally caps from Hogans Heroes. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\nThese are curated screencaps of episodes from the 1960s tv show HOGANS HEROES. \n\nHand picked content from online sources, and capped using VLC's scene filter.\n\n- Curated by: [URL",
"## Uses\n\nResearch around text classification and preservation of old media.",
"### Direct Use\n\nStudy of character basis, research around the artistic nature of the episode's set design.",
"### Out-of-Scope Use\n\nGoing against local laws and regulations, onselling the dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nPreservation of old media.",
"### Source Data\n\nYoutube & Dvd Sources",
"#### Data Collection and Processing\n\nCollection: Unfiltered DVD / Youtube Caps.",
"#### Personal and Sensitive Information\n\nThere should be 0 personal info in here.",
"## Bias, Risks, and Limitations\n\n????? - OH! Bias/Risks: Warning that this is a show that is largely based around World War 2. Like Dad's army this could contain sensitive topics and images.\nThe jokes set within the Reccomendations are just that we feel this TV show and the caps within are fairly safe, but it IS understandable if people largely have trigger issues with WW2.",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\nRisks? Copyright, DMCA, blinding adoration towards Bob Crane? UWU KLINK? \n\nSoical bias: WW2 media.",
"## Dataset Card Authors \n\n[URL",
"## Dataset Card Contact\n\n[URL"
]
| [
"TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-creativeml-openrail-m #art #region-us \n",
"# Dataset Card for Hogans Heroes TV Caps\n\n\n\nThis dataset is generally caps from Hogans Heroes. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\nThese are curated screencaps of episodes from the 1960s tv show HOGANS HEROES. \n\nHand picked content from online sources, and capped using VLC's scene filter.\n\n- Curated by: [URL",
"## Uses\n\nResearch around text classification and preservation of old media.",
"### Direct Use\n\nStudy of character basis, research around the artistic nature of the episode's set design.",
"### Out-of-Scope Use\n\nGoing against local laws and regulations, onselling the dataset.",
"## Dataset Creation",
"### Curation Rationale\n\nPreservation of old media.",
"### Source Data\n\nYoutube & Dvd Sources",
"#### Data Collection and Processing\n\nCollection: Unfiltered DVD / Youtube Caps.",
"#### Personal and Sensitive Information\n\nThere should be 0 personal info in here.",
"## Bias, Risks, and Limitations\n\n????? - OH! Bias/Risks: Warning that this is a show that is largely based around World War 2. Like Dad's army this could contain sensitive topics and images.\nThe jokes set within the Reccomendations are just that we feel this TV show and the caps within are fairly safe, but it IS understandable if people largely have trigger issues with WW2.",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\nRisks? Copyright, DMCA, blinding adoration towards Bob Crane? UWU KLINK? \n\nSoical bias: WW2 media.",
"## Dataset Card Authors \n\n[URL",
"## Dataset Card Contact\n\n[URL"
]
| [
47,
33,
4,
53,
15,
22,
24,
5,
14,
10,
18,
17,
96,
68,
8,
7
]
| [
"passage: TAGS\n#task_categories-text-classification #size_categories-1K<n<10K #language-English #license-creativeml-openrail-m #art #region-us \n# Dataset Card for Hogans Heroes TV Caps\n\n\n\nThis dataset is generally caps from Hogans Heroes. It has been generated using this raw template.## Dataset Details### Dataset Description\n\nThese are curated screencaps of episodes from the 1960s tv show HOGANS HEROES. \n\nHand picked content from online sources, and capped using VLC's scene filter.\n\n- Curated by: [URL## Uses\n\nResearch around text classification and preservation of old media.### Direct Use\n\nStudy of character basis, research around the artistic nature of the episode's set design.### Out-of-Scope Use\n\nGoing against local laws and regulations, onselling the dataset.## Dataset Creation### Curation Rationale\n\nPreservation of old media.### Source Data\n\nYoutube & Dvd Sources#### Data Collection and Processing\n\nCollection: Unfiltered DVD / Youtube Caps.#### Personal and Sensitive Information\n\nThere should be 0 personal info in here.## Bias, Risks, and Limitations\n\n????? - OH! Bias/Risks: Warning that this is a show that is largely based around World War 2. Like Dad's army this could contain sensitive topics and images.\nThe jokes set within the Reccomendations are just that we feel this TV show and the caps within are fairly safe, but it IS understandable if people largely have trigger issues with WW2.### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\nRisks? Copyright, DMCA, blinding adoration towards Bob Crane? UWU KLINK? \n\nSoical bias: WW2 media.## Dataset Card Authors \n\n[URL## Dataset Card Contact\n\n[URL"
]
|
bdda095944c4af4d3fc0f2724ace2f18bfb7a4d1 |
This dataset comprises a collection of the most recent (up to 17 November 2023) 50K arXiv papers' metadata in the computer science categories:
- cs.AI (Artificial Intelligence),
- cs.CL (Computation and Language),
- cs.CV (Computer Vision),
- cs.LG (Machine Learning),
- cs.MA (Multiagent Systems), and
- cs.NE (Neural and Evolutionary Computing).
Each metadata entry includes the embeddings for the 'title' and 'summary' (abstract) of the paper, generated using [Cohere's Embed-v3](https://txt.cohere.com/introducing-embed-v3/). | dcarpintero/arXiv.cs.AI.CL.CV.LG.MA.NE.embedv3 | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-17T22:17:27+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering"]} | 2023-11-22T23:20:09+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
|
This dataset comprises a collection of the most recent (up to 17 November 2023) 50K arXiv papers' metadata in the computer science categories:
- cs.AI (Artificial Intelligence),
- cs.CL (Computation and Language),
- cs.CV (Computer Vision),
- cs.LG (Machine Learning),
- cs.MA (Multiagent Systems), and
- cs.NE (Neural and Evolutionary Computing).
Each metadata entry includes the embeddings for the 'title' and 'summary' (abstract) of the paper, generated using Cohere's Embed-v3. | []
| [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n"
]
| [
53
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n"
]
|
b409d5b1d662a2d68ba6191378a0f4be02424b8c | # Dataset Card for "wikipedia_chunked"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | simple-pretraining/wikipedia_chunked | [
"region:us"
]
| 2023-11-17T22:35:06+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "chunk_id", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 21215289300, "num_examples": 72703460}], "download_size": 0, "dataset_size": 21215289300}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-18T00:25:58+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "wikipedia_chunked"
More Information needed | [
"# Dataset Card for \"wikipedia_chunked\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"wikipedia_chunked\"\n\nMore Information needed"
]
| [
6,
15
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"wikipedia_chunked\"\n\nMore Information needed"
]
|
b574852a37d0165f793ea7549e15ffb81e5d9145 |
This dataset comprises a collection of the most recent (up to 17 November 2023) 50K arXiv papers' metadata in the computer science category: 'cs.CL' (Computation and Language). Each metadata entry includes the embeddings for the 'title' and 'summary' (abstract) of the paper, generated using [Cohere's Embed-v3](https://txt.cohere.com/introducing-embed-v3/). | dcarpintero/arXiv.cs.CL.embedv3 | [
"task_categories:text-classification",
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
]
| 2023-11-17T22:44:31+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-classification", "question-answering"]} | 2023-11-22T23:18:21+00:00 | []
| [
"en"
]
| TAGS
#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
|
This dataset comprises a collection of the most recent (up to 17 November 2023) 50K arXiv papers' metadata in the computer science category: 'cs.CL' (Computation and Language). Each metadata entry includes the embeddings for the 'title' and 'summary' (abstract) of the paper, generated using Cohere's Embed-v3. | []
| [
"TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n"
]
| [
53
]
| [
"passage: TAGS\n#task_categories-text-classification #task_categories-question-answering #size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n"
]
|
0abe15fa975bffea3d1a499e7ee4e4a6bdc5e932 | ## Update
See the larger dataset: [en-nb-15k](https://huggingface.co/datasets/thevox/en-nb-15k)
## Methodology
Used GPT-3.5 with a translation prompt to produce two Norwegian translations of each English text: a normal one and a "more natural" one. Context is also generated.
## Data
Input sample format:
```
Oversett til Norsk:
{text}
```
Each output sample is formatted like this:
```
Kontekst: {a, b, c}
Oversettelse:
{translation}
Mer naturlig:
{improved_translation}
```
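A minimal parsing sketch for one output sample, assuming the markers appear exactly as in the template above; the helper name and the sample string are illustrative only:

```python
# Hedged sketch: split an output sample into context, translation, and the
# "more natural" rewrite, keyed on the markers shown in the template above.
import re

def parse_sample(text: str) -> dict:
    pattern = (
        r"Kontekst:\s*(?P<context>.*?)\s*"
        r"Oversettelse:\s*(?P<translation>.*?)\s*"
        r"Mer naturlig:\s*(?P<improved>.*)"
    )
    match = re.search(pattern, text, flags=re.DOTALL)
    return match.groupdict() if match else {}

sample = "Kontekst: skole, lekser\nOversettelse:\nEleven gjorde leksene.\nMer naturlig:\nEleven gjorde leksene sine."
print(parse_sample(sample))
```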
## Future work
The dataset will be used to train LLM-based translation models based on Llama 2 and similar, to try to rival DeepL and ChatGPT machine translation.
## Dataset used
Original english text from: [nampdn-ai/tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks)
### Author Contact
[email protected] | thevox/en-nb-10k | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:nb",
"language:no",
"license:mpl-2.0",
"region:us"
]
| 2023-11-17T22:59:26+00:00 | {"language": ["en", "nb", "no"], "license": "mpl-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["translation"], "pretty_name": "English-Norwegian Translation"} | 2023-11-19T13:11:20+00:00 | []
| [
"en",
"nb",
"no"
]
| TAGS
#task_categories-translation #size_categories-10K<n<100K #language-English #language-Norwegian Bokmål #language-Norwegian #license-mpl-2.0 #region-us
| ## Update
See the larger dataset: en-nb-15k
## Methadology
Used GPT-3.5 with a translation prompt to give two versions of translation in Norwegian from English - normal and "more natural". Context is also generated.
## Data
Input sample format:
Each output sample is formatted like this:
## Future work
Dataset will be used to train LLM-based translation models based on LLama 2 and similar to try to rival DeepL and ChatGPT machine translation.
## Dataset used
Original english text from: nampdn-ai/tiny-textbooks
### Author Contaxt
jonaslsa@URL | [
"## Update\nSee the larger dataset: en-nb-15k",
"## Methadology\nUsed GPT-3.5 with a translation prompt to give two versions of translation in Norwegian from English - normal and \"more natural\". Context is also generated.",
"## Data\nInput sample format:\n\nEach output sample is formatted like this:",
"## Future work\nDataset will be used to train LLM-based translation models based on LLama 2 and similar to try to rival DeepL and ChatGPT machine translation.",
"## Dataset used\nOriginal english text from: nampdn-ai/tiny-textbooks",
"### Author Contaxt\njonaslsa@URL"
]
| [
"TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-English #language-Norwegian Bokmål #language-Norwegian #license-mpl-2.0 #region-us \n",
"## Update\nSee the larger dataset: en-nb-15k",
"## Methadology\nUsed GPT-3.5 with a translation prompt to give two versions of translation in Norwegian from English - normal and \"more natural\". Context is also generated.",
"## Data\nInput sample format:\n\nEach output sample is formatted like this:",
"## Future work\nDataset will be used to train LLM-based translation models based on LLama 2 and similar to try to rival DeepL and ChatGPT machine translation.",
"## Dataset used\nOriginal english text from: nampdn-ai/tiny-textbooks",
"### Author Contaxt\njonaslsa@URL"
]
| [
53,
14,
39,
16,
36,
20,
10
]
| [
"passage: TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-English #language-Norwegian Bokmål #language-Norwegian #license-mpl-2.0 #region-us \n## Update\nSee the larger dataset: en-nb-15k## Methadology\nUsed GPT-3.5 with a translation prompt to give two versions of translation in Norwegian from English - normal and \"more natural\". Context is also generated.## Data\nInput sample format:\n\nEach output sample is formatted like this:## Future work\nDataset will be used to train LLM-based translation models based on LLama 2 and similar to try to rival DeepL and ChatGPT machine translation.## Dataset used\nOriginal english text from: nampdn-ai/tiny-textbooks### Author Contaxt\njonaslsa@URL"
]
|
d99881e47bcdb85c057d69040b3ec73f509d4de5 | # Dataset Card for "genta-mix"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | genta-tech/rlhf-id-30 | [
"region:us"
]
| 2023-11-17T23:29:34+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "prompt", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "response", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 206472478.41576737, "num_examples": 112052}, {"name": "test", "num_bytes": 22926526.970655464, "num_examples": 12451}], "download_size": 135482076, "dataset_size": 229399005.38642284}} | 2023-11-17T23:30:51+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "genta-mix"
More Information needed | [
"# Dataset Card for \"genta-mix\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"genta-mix\"\n\nMore Information needed"
]
| [
6,
14
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"genta-mix\"\n\nMore Information needed"
]
|
d4c3cdb1f9c6b21445dd2fbbffd1a9310c4b4196 | This dataset is the result of combing through several reverse proxy log sets and cleaning them of refusals, duplicate, incomplete, and poor-quality responses. Lots of manual quality checks. There are also things like ecommerce descriptions for sex toys and bondage gear, as well as examples of SEO-optimized porn video descriptions. I will definitely be improving on this dataset continuously; it should be considered a work in progress. My goal is to create a model (or set of models) which can completely replace OpenAI models for erotic roleplay and adult industry use.
Please consider supporting me on Patreon, I'm only asking for about tree fiddy.
https://www.patreon.com/openerotica
I will continue to refine this dataset and release new versions eventually.
Updates
12/2/2023: Removed some BOS and EOS tags that were missed by filtering. | openerotica/freedom-rp | [
"license:apache-2.0",
"not-for-all-audiences",
"region:us"
]
| 2023-11-18T01:14:06+00:00 | {"license": "apache-2.0", "tags": ["not-for-all-audiences"]} | 2024-02-05T18:46:47+00:00 | []
| []
| TAGS
#license-apache-2.0 #not-for-all-audiences #region-us
| This dataset is the result of combing through several reverse proxy logs sets and cleaning them of refusals, duplicate, incomplete, and poor quality responses. Lots of manual quality checks. There's also things like ecommerce descriptions for sex toys and bondage gear, as well as examples of SEO optimized porn video descriptions. I will definitely be improving on this dataset continously; it should be considered a work in progress. My goal is to create a model (or set of models) which can completely replace OpenAI models for erotic roleplay and adult industry use.
Please consider supporting me on Patreon, I'm only asking for about tree fiddy.
URL
I will continue to refine this dataset and release new versions eventually.
Updates
12/2/2023: Removed some BOS and EOS tags were missed by filtering. | []
| [
"TAGS\n#license-apache-2.0 #not-for-all-audiences #region-us \n"
]
| [
23
]
| [
"passage: TAGS\n#license-apache-2.0 #not-for-all-audiences #region-us \n"
]
|
13957e4156b5b405723962efdf1cb9b275699b04 | # Dataset Card for "data_deduplicated_part01"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_deduplicated_part01 | [
"region:us"
]
| 2023-11-18T02:10:14+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3411491347, "num_examples": 88852759}], "download_size": 1829776174, "dataset_size": 3411491347}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-18T02:11:48+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_deduplicated_part01"
More Information needed | [
"# Dataset Card for \"data_deduplicated_part01\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_deduplicated_part01\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_deduplicated_part01\"\n\nMore Information needed"
]
|
f0cc66f3b20b8b791a8366bb1246113d90e7d53d | # Dataset Card for "zlbn_en1"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | laitrongduc/zlbn_en1 | [
"region:us"
]
| 2023-11-18T02:33:12+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 86586194.13, "num_examples": 1362}], "download_size": 83981296, "dataset_size": 86586194.13}} | 2023-11-18T02:33:44+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "zlbn_en1"
More Information needed | [
"# Dataset Card for \"zlbn_en1\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"zlbn_en1\"\n\nMore Information needed"
]
| [
6,
16
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"zlbn_en1\"\n\nMore Information needed"
]
|
683092170ac8960b34542c0c2b6ba6107778c1c6 | # Dataset Card for "data_deduplicated_part03"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_deduplicated_part03 | [
"region:us"
]
| 2023-11-18T04:27:14+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4880110137, "num_examples": 109217155}], "download_size": 2468882830, "dataset_size": 4880110137}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-18T04:29:20+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_deduplicated_part03"
More Information needed | [
"# Dataset Card for \"data_deduplicated_part03\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_deduplicated_part03\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_deduplicated_part03\"\n\nMore Information needed"
]
|
36439ea6d4f09d0acb52b90ffef1f057b6f77cec | # Dataset Card for "autotrain-data-r2or-l87f-q4gb"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | josiauhlol/autotrain-data-r2or-l87f-q4gb | [
"region:us"
]
| 2023-11-18T04:38:28+00:00 | {"dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "autotrain_text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 757, "num_examples": 14}, {"name": "validation", "num_bytes": 757, "num_examples": 14}], "download_size": 4704, "dataset_size": 1514}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-11-18T04:38:29+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "autotrain-data-r2or-l87f-q4gb"
More Information needed | [
"# Dataset Card for \"autotrain-data-r2or-l87f-q4gb\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"autotrain-data-r2or-l87f-q4gb\"\n\nMore Information needed"
]
| [
6,
27
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"autotrain-data-r2or-l87f-q4gb\"\n\nMore Information needed"
]
|
2dd62cff6b73d7db69f0bbc0a8d84df1a063df1f | # fsGPT | josiauhlol/fsGPT | [
"task_categories:conversational",
"language:en",
"license:openrail",
"ai",
"region:us"
]
| 2023-11-18T04:56:28+00:00 | {"language": "en", "license": "openrail", "task_categories": ["conversational"], "pretty_name": "freesmartGPT", "tags": ["ai"]} | 2023-11-18T05:13:05+00:00 | []
| [
"en"
]
| TAGS
#task_categories-conversational #language-English #license-openrail #ai #region-us
| # fsGPT | [
"# fsGPT"
]
| [
"TAGS\n#task_categories-conversational #language-English #license-openrail #ai #region-us \n",
"# fsGPT"
]
| [
28,
5
]
| [
"passage: TAGS\n#task_categories-conversational #language-English #license-openrail #ai #region-us \n# fsGPT"
]
|
ee6320c40c8d71a0070eef5278bd375b12ed3534 | Why would I spend all that time creating these datasets and training just to brain-damage the models with wikitext during quantization? This dataset is primarily multi-turn ERP chat. It's formatted to be a drop-in replacement for wikitext for quantization methods such as AutoGPTQ or AWQ. | openerotica/erotiquant | [
"license:apache-2.0",
"region:us"
]
| 2023-11-18T05:21:21+00:00 | {"license": "apache-2.0"} | 2024-01-02T11:50:36+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
| Why would I spend all that time creating these datasets and training just to brain damage the models with wikitext during quantization? This dataset is primarily multi turn ERP chat. It's formatted to be a drop in replacement for wikitext for quantization methods such as AutoGPTQ or AWQ. | []
| [
"TAGS\n#license-apache-2.0 #region-us \n"
]
| [
14
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
]
|
b224fa5cacb5cd06b8779f1e16d229567fc2ff21 | * Long conversations filtered from the [sharegpt-38k](https://huggingface.co/datasets/shibing624/sharegpt_gpt4) and [sharegpt-90k](RyokoAI/ShareGPT52K) datasets, each longer than 8k characters (more than 8k words for English, more than 8k Chinese characters for Chinese)
* Already converted to the ChatML conversation format | yuyijiong/Sharegpt-long-conversation | [
"language:zh",
"language:en",
"license:cc-by-nc-4.0",
"region:us"
]
| 2023-11-18T06:09:35+00:00 | {"language": ["zh", "en"], "license": "cc-by-nc-4.0"} | 2023-11-18T06:15:26+00:00 | []
| [
"zh",
"en"
]
| TAGS
#language-Chinese #language-English #license-cc-by-nc-4.0 #region-us
| * Long conversations filtered from the sharegpt-38k and sharegpt-90k datasets, each longer than 8k characters (more than 8k words for English, more than 8k Chinese characters for Chinese)
* Already converted to the ChatML conversation format | []
| [
"TAGS\n#language-Chinese #language-English #license-cc-by-nc-4.0 #region-us \n"
]
| [
26
]
| [
"passage: TAGS\n#language-Chinese #language-English #license-cc-by-nc-4.0 #region-us \n"
]
|
05f43c33adbfa1d58ae0148dc5b9f909b9dd0012 | # Dataset Card for "imdb-card-pred-scientific"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | vic0428/imdb-card-pred-scientific | [
"region:us"
]
| 2023-11-18T06:13:21+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "true_cardinality", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 39344995.2, "num_examples": 80000}, {"name": "test", "num_bytes": 9836248.8, "num_examples": 20000}], "download_size": 8634654, "dataset_size": 49181244.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-11-18T06:17:42+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "imdb-card-pred-scientific"
More Information needed | [
"# Dataset Card for \"imdb-card-pred-scientific\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"imdb-card-pred-scientific\"\n\nMore Information needed"
]
| [
6,
20
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"imdb-card-pred-scientific\"\n\nMore Information needed"
]
|
9b26b47f74f26ad8c76d4ebc292069424b61daa8 | i am tim dettmers no 1 fan | ludis/foundational_rp | [
"region:us"
]
| 2023-11-18T06:58:47+00:00 | {} | 2023-11-20T22:16:07+00:00 | []
| []
| TAGS
#region-us
| i am tim dettmers no 1 fan | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
aa092625fee1f858db7dde3e253ff5825543889f | # Dataset Card for "github-issues"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ruoruoliu/github-issues | [
"language:en",
"code",
"region:us"
]
| 2023-11-18T07:07:31+00:00 | {"language": ["en"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "url", "dtype": "string"}, {"name": "repository_url", "dtype": "string"}, {"name": "labels_url", "dtype": "string"}, {"name": "comments_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "number", "dtype": "int64"}, {"name": "title", "dtype": "string"}, {"name": "user", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "labels", "list": [{"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "name", "dtype": "string"}, {"name": "color", "dtype": "string"}, {"name": "default", "dtype": "bool"}, {"name": "description", "dtype": "string"}]}, {"name": "state", "dtype": "string"}, {"name": "locked", "dtype": "bool"}, {"name": "assignee", "struct": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "float64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "assignees", "list": [{"name": "login", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "node_id", "dtype": "string"}, {"name": "avatar_url", "dtype": "string"}, {"name": "gravatar_id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "html_url", "dtype": "string"}, {"name": "followers_url", "dtype": "string"}, {"name": "following_url", "dtype": "string"}, {"name": "gists_url", "dtype": "string"}, {"name": "starred_url", "dtype": "string"}, {"name": "subscriptions_url", "dtype": "string"}, {"name": "organizations_url", "dtype": "string"}, {"name": "repos_url", "dtype": "string"}, {"name": "events_url", "dtype": "string"}, {"name": "received_events_url", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "site_admin", "dtype": "bool"}]}, {"name": "milestone", "dtype": "null"}, {"name": "comments", "sequence": "string"}, {"name": "created_at", "dtype": "int64"}, {"name": "updated_at", "dtype": "int64"}, {"name": "closed_at", 
"dtype": "int64"}, {"name": "author_association", "dtype": "string"}, {"name": "active_lock_reason", "dtype": "null"}, {"name": "draft", "dtype": "null"}, {"name": "pull_request", "dtype": "null"}, {"name": "body", "dtype": "string"}, {"name": "reactions", "struct": [{"name": "url", "dtype": "string"}, {"name": "total_count", "dtype": "int64"}, {"name": "+1", "dtype": "int64"}, {"name": "-1", "dtype": "int64"}, {"name": "laugh", "dtype": "int64"}, {"name": "hooray", "dtype": "int64"}, {"name": "confused", "dtype": "int64"}, {"name": "heart", "dtype": "int64"}, {"name": "rocket", "dtype": "int64"}, {"name": "eyes", "dtype": "int64"}]}, {"name": "timeline_url", "dtype": "string"}, {"name": "performed_via_github_app", "dtype": "null"}, {"name": "state_reason", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 299481, "num_examples": 58}], "download_size": 174782, "dataset_size": 299481}, "tags": ["code"]} | 2023-11-18T07:10:26+00:00 | []
| [
"en"
]
| TAGS
#language-English #code #region-us
| # Dataset Card for "github-issues"
More Information needed | [
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
]
| [
"TAGS\n#language-English #code #region-us \n",
"# Dataset Card for \"github-issues\"\n\nMore Information needed"
]
| [
12,
15
]
| [
"passage: TAGS\n#language-English #code #region-us \n# Dataset Card for \"github-issues\"\n\nMore Information needed"
]
|
fdb8980dfad453a989fa68be1e0b5cb689016214 | **Description**: Snapshot measurements on 27 variables from a distillation column; measured over 2.5 years.
**Data source**: From an industrial source; variable names have been coded. e.g. Temp1 is a temperature, but we cannot disclose where it is measured on the column.
Temperatures are in Fahrenheit
Pressures are measured in bars
FlowC1 in units of MSCFD
FlowC3 and FlowC4 are in units of MBPD
Temp11 = Temp3 - Temp9 = the temperature increase of the stream leaving the column and returning back, after passing through the reboiler
InvTemp tags are 1000/temperature, therefore with units of inverted Fahrenheit
The last column, VapourPressure, is a quality variable measured in the laboratory. The other snapshot measurements in the row correspond to the process conditions when the sample was taken.
**Data shape** 253 rows and 27 columns | talaviyabhavik/distillation | [
"license:apache-2.0",
"region:us"
]
| 2023-11-18T07:32:49+00:00 | {"license": "apache-2.0"} | 2023-11-18T07:38:21+00:00 | []
| []
| TAGS
#license-apache-2.0 #region-us
| Description: Snapshot measurements on 27 variables from a distillation column; measured over 2.5 years.
Data source: From an industrial source; variable names have been coded. e.g. Temp1 is a temperature, but we cannot disclose where it is measured on the column.
Temperatures are in Fahrenheit
Pressures are measured in bars
FlowC1 in units of MSCFD
FlowC3 and FlowC4 are in units of MBPD
Temp11 = Temp3 - Temp9 = the temperature increase of the stream leaving the column and returning back, after passing through the reboiler
InvTemp tags are 1000/temperature, therefore with units of inverted Fahrenheit
The last column, VapourPressure, is a quality variable measured in the laboratory. The other snapshot measurements in the row correspond to the process conditions when the sample was taken.
Data shape 253 rows and 27 columns | []
| [
"TAGS\n#license-apache-2.0 #region-us \n"
]
| [
14
]
| [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
]
|
5a25a64da635c8faac1501ba91fc6b38e1d8e92a | I realize many people downloaded this dataset.
Please note that the "validation" subset cannot be used for validation, and the training dataset is also not meant for training.
Have a good day! | liangzid/glue_prompts | [
"region:us"
]
| 2023-11-18T07:34:35+00:00 | {} | 2023-12-16T06:29:51+00:00 | []
| []
| TAGS
#region-us
| I realize many people downloaded this dataset.
Please note that the "validation" subset cannot be used for validation, and the training dataset is also not meant for training.
Have a good day! | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
084980a58cee42476af108a83a97fcb05b5ad24f | Single Danbooru-Tag text input, intended for recalibration of Cross-Attention.
Word frequency cutoff: tags with at least 1000 posts.
```python
import re  # required for the sanitising pattern below
# Replace any character outside [\w\-_\. ] with '_' to get a filesystem-safe name.
safe_file_name = re.sub(r'[^\w\-_\. ]', '_', prompt)
file_name = f"{seed_folder}/{safe_file_name}_{seed}.png"
``` | JosephusCheung/naidiffusionv3distil | [
"region:us"
]
| 2023-11-18T08:00:50+00:00 | {} | 2023-11-18T17:39:56+00:00 | []
| []
| TAGS
#region-us
| Single Danbooru-Tag text input, intended for recalibration of Cross-Attention.
Word frequency cutoff: tags with at least 1000 posts.
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
60dcc0d54e7615d257a76119f532c0e2ea44665e | # Dataset Card for "hackernews_title_upvote"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arpanetus/hackernews_title_upvote | [
"region:us"
]
| 2023-11-18T09:04:14+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13840984, "num_examples": 15064}], "download_size": 8346861, "dataset_size": 13840984}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-18T13:45:17+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hackernews_title_upvote"
More Information needed | [
"# Dataset Card for \"hackernews_title_upvote\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hackernews_title_upvote\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hackernews_title_upvote\"\n\nMore Information needed"
]
|
ab5da710bae2be5c950ff0e4dff3fc6828881f68 | # Dataset Card for "turkishReviews-ds-mini"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | AyhanCagan/turkishReviews-ds-mini | [
"region:us"
]
| 2023-11-18T09:18:02+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "review", "dtype": "string"}, {"name": "review_length", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1252876.2642514652, "num_examples": 3378}, {"name": "validation", "num_bytes": 139455.7357485349, "num_examples": 376}], "download_size": 896651, "dataset_size": 1392332.0}} | 2023-11-29T10:42:22+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "turkishReviews-ds-mini"
More Information needed | [
"# Dataset Card for \"turkishReviews-ds-mini\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"turkishReviews-ds-mini\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"turkishReviews-ds-mini\"\n\nMore Information needed"
]
|
ea5ee59cf1d210b7a1f9fa7ca58a8017e9ed3b65 | # Dataset Card for "data_deduplicated_part04"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_deduplicated_part04 | [
"region:us"
]
| 2023-11-18T09:49:13+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4854481434, "num_examples": 103054258}], "download_size": 2391891371, "dataset_size": 4854481434}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-18T09:51:15+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_deduplicated_part04"
More Information needed | [
"# Dataset Card for \"data_deduplicated_part04\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_deduplicated_part04\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_deduplicated_part04\"\n\nMore Information needed"
]
|
fe7518d9ef714e5136f9eaadb8052a8133afdfa2 | # Dataset Card for "hackernews_title_upvote_0"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arpanetus/hackernews_title_upvote_0 | [
"region:us"
]
| 2023-11-18T09:52:12+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "completion", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13840984, "num_examples": 15064}], "download_size": 8346861, "dataset_size": 13840984}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-18T13:45:05+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "hackernews_title_upvote_0"
More Information needed | [
"# Dataset Card for \"hackernews_title_upvote_0\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"hackernews_title_upvote_0\"\n\nMore Information needed"
]
| [
6,
21
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"hackernews_title_upvote_0\"\n\nMore Information needed"
]
|
53461857cdf2ba7ccf8033a6ba6cdb7285d8e0cd | [ArtBench](https://github.com/liaopeiyuan/artbench) samples encoded to float16 SDXL latents via [Ollin VAE](https://huggingface.co/madebyollin/sdxl-vae-fp16-fix).
Dataset created using [this script](https://github.com/Birch-san/sdxl-diffusion-decoder/blob/main/script/make_sdxl_latent_dataset.py).
Didn't bother saving mean & logvar, because variance is low enough it's not worth the doubling of filesize to retain.
Sampled from diagonal gaussian distribution, saved the resulting latents.
Also kept the original image.
Schema/usage:
```python
from typing import TypedDict, Iterator
from webdataset import WebDataset
Sample = TypedDict('Sample', {
'__key__': str,
'__url__': str,
'cls.txt': bytes, # UTF-8 encoded class id from 0 to 9 inclusive
'img.png': bytes, # PIL image, serialized. 256*256px
'latent.pth': bytes, # FloatTensor, serialized. 32*32 latents
})
it: Iterator[Sample] = WebDataset('train/{00000..00004}.tar')
for sample in it:
pass
```
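As a rough usage sketch (assuming the `.pth` payloads were written with `torch.save`, which `torch.load` can read back from an in-memory buffer), the raw bytes of one sample can be decoded like this:
```python
import io

import torch
from PIL import Image
from webdataset import WebDataset

for sample in WebDataset('train/{00000..00004}.tar'):
    label = int(sample['cls.txt'].decode('utf-8'))         # class id, 0-9
    image = Image.open(io.BytesIO(sample['img.png']))       # 256x256 source image
    latent = torch.load(io.BytesIO(sample['latent.pth']))   # 32x32 float16 latents
    break  # inspect just the first sample
```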
The data sources of ArtBench-10 are released under a Fair Use license, as requested by WikiArt, the Ukiyo-e.org database and The Surrealism Website.
For more information, see https://www.wikiart.org/en/terms-of-use, https://ukiyo-e.org/about and https://surrealism.website/
train: 50000 samples
test: 10000 samples
```python
# test/avg/val.pt (mean):
[-0.11362826824188232, -0.7059057950973511, 0.4819808006286621, 2.2327630519866943]
# test/avg/sq.pt:
[52.59075927734375, 30.115631103515625, 44.977020263671875, 30.228885650634766]
# std
# (sq - val**2)**.5
[7.251058578491211, 5.442180633544922, 6.689148902893066, 5.024306297302246]
# 1/std
[0.1379109025001526, 0.18374986946582794, 0.14949584007263184, 0.19903245568275452]
# train/avg/val.pt (mean):
[-0.1536690890789032, -0.7142514586448669, 0.4706766605377197, 2.24863600730896]
# train/avg/sq.pt:
[51.99677276611328, 30.184646606445312, 44.909732818603516, 30.234216690063477]
# std
# (sq - val**2)**.5
[7.2092413902282715, 5.447429656982422, 6.68492317199707, 5.017753601074219]
# 1/std
[0.1387108564376831, 0.18357281386852264, 0.14959034323692322, 0.1992923617362976]
``` | Birchlabs/sdxl-latents-artbench | [
"region:us"
]
| 2023-11-18T10:30:38+00:00 | {} | 2023-11-22T22:39:15+00:00 | []
| []
| TAGS
#region-us
| ArtBench samples encoded to float16 SDXL latents via Ollin VAE.
Dataset created using this script.
Didn't bother saving mean & logvar, because variance is low enough it's not worth the doubling of filesize to retain.
Sampled from diagonal gaussian distribution, saved the resulting latents.
Also kept the original image.
Schema/usage:
The data sources of ArtBench-10 are released under a Fair Use license, as requested by WikiArt, the URL database and The Surrealism Website.
For more information, see URL URL and https://surrealism.website/
train: 50000 samples
test: 10000 samples
| []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
b53a24f8edf5eb33d033a53b5b53d0a4a220d4ae | <!-- To update the above `dataset_info` section, please run the following command: `datasets-cli test open_australian_legal_qa.py --save_info --all_configs`. -->
# **Open Australian Legal QA ⚖️**
<a href="https://huggingface.co/datasets/umarbutler/open-australian-legal-qa" alt="Release"><img src="https://img.shields.io/badge/release-v2.0.0-green"></a>
Open Australian Legal QA is the first open dataset of Australian legal questions and answers.
Comprised of 2,124 questions and answers synthesised by `gpt-4` from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), the largest open database of Australian law, the dataset is intended to facilitate the development of legal AI assistants in Australia.
To ensure its accessibility to as wide an audience as possible, the dataset is distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).
## Usage 👩💻
The below code snippet illustrates how the dataset may be loaded with the [Hugging Face Datasets](https://huggingface.co/docs/datasets/index) Python library:
```python
from datasets import load_dataset
corpus = load_dataset('umarbutler/open_australian_legal_qa', split='train')
```
To speed up the loading of the dataset, you may wish to install [`orjson`](https://github.com/ijl/orjson).
## Structure 🗂️
The dataset is stored in [qa.jsonl](https://huggingface.co/datasets/umarbutler/open-australian-legal-qa/blob/main/qa.jsonl), a JSON Lines file where each line represents a question-answer pair consisting of the following keys:
| Key | Description |
| --- | --- |
| question | The text of the question. |
| answer | The text of the answer to the question. |
| text | The text of the question and answer in the format `Question: {question}\nAnswer: {answer}`. |
| prompt | The text of the prompt used to generate the question-answer pair. |
| source | A dictionary representing the document from which the question-answer pair was synthesised, sharing the same keys as documents in the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus), with the `text` field constituting the text of the chunk used to generate the pair. |
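For example, after loading the split as shown in the usage snippet above, an individual record can be inspected as a plain dictionary:
```python
sample = corpus[0]

print(sample['question'])            # the synthesised question
print(sample['answer'])              # the extracted answer
print(sample['source']['citation'])  # the document the pair was synthesised from
```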
## Methodology 🧪
2,124 documents from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus) were randomly sampled, barring bills and documents consisting entirely of whitespace. These documents were then split into semantically meaningful chunks up to 384-tokens-long (as determined by [`tiktoken`](https://github.com/openai/tiktoken)'s tokeniser for `gpt-4`) with the [`semchunk`](https://github.com/umarbutler/semchunk) Python library.
Chunks that consisted entirely of whitespace, contained 6 or more consecutive periods, ignoring whitespace (indicating that they contained a table of contents) or that were less than 96-tokens-long were discarded. A single chunk was randomly selected from each document (for those documents with a chunk to select) and subsequently cleaned of consecutive newlines, consecutive whitespace and lines consisting entirely of whitespace.
These chunks were then embedded into the following prompt, with the names of jurisdictions and types being capitalised and stripped of hyphens:
```xml
# Snippet
The snippet from an Australian legal document from which you must synthesise a question and answer is provided below.
<document_metadata>
<document_title><!-- insert citation here --></document_title>
<document_jurisdiction><!-- insert jurisdiction here --></document_jurisdiction>
<document_type><!-- insert type here --></document_type>
</document_metadata>
<snippet>
<!-- insert text here -->
</snippet>
# Format
You must format your response as follows:
<format>
# Question
{A question related to the snippet, or a topic discussed therein.}
# Answer
{The answer to the question, extracted from the snippet.}
</format>
# Instructions
You must act as a question-and-answer synthesiser that takes a snippet from an Australian legal document and synthesises a question related to the snippet, or a topic discussed therein, and an answer to that question, extracted from the snippet.
Your question must be decontextualised and standalone from the snippet. If the question pertains to a particular jurisdiction or document, it must state that explicitly (eg, 'In Victoria, is it lawful for ...?', 'What did the Court decide in Mabo v Queensland (No 2) [1992] HCA 23?', etc...).
Your answer must also be decontextualised and standalone from the snippet. It must reference the document from which it came (eg, 'Under the Crimes Act 1958 (Vic), ...', 'In Mabo v Queensland (No 2) [1992] HCA 23, the Court decided ...', etc...), not the snippet itself. It must be capable of being understood on its own and without reference to the snippet or its source document.
When referring to a document (eg, the Crimes Act) or a part thereof (eg, Paragraph 1), or to a person (eg, the Minister), organisation (eg, the Department) or concept (eg, the rule of law), you must refer to it by its full name (eg, the Crimes Act 1958 (Vic) instead of the Crimes Act, Paragraph 1 of ABC v XYZ instead of Paragraph 1, the Commonwealth Minister for Finance instead of the Minister).
If it is not possible to synthesise a question and answer from the snippet, you must respond with `<!no_qa!>`. Otherwise, your response must conform to the provided format.
```
The resulting prompts were then sent to `gpt-4` with the following hyperparameters:
| Hyperparameter | Value |
| --- | --- |
| `temperature` | 0 |
| `top_p` | 1 |
| `frequency_penalty` | 0 |
| `presence_penalty` | 0 |
| `max_tokens` | 768 |
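As an illustrative sketch only (the author's exact client code is not included in this card), such a request could be issued with the OpenAI Python client as follows:

```python
from openai import OpenAI

client = OpenAI()

def synthesise(prompt: str) -> str:
    """Send one synthesis prompt to gpt-4 with the hyperparameters listed above."""
    response = client.chat.completions.create(
        model="gpt-4",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
        top_p=1,
        frequency_penalty=0,
        presence_penalty=0,
        max_tokens=768,
    )
    return response.choices[0].message.content
```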
`gpt-4`'s responses were parsed with the regex pattern `#\s?Question:?\s+((?:\n|.)+)#\s?Answer:?\s+((?:\n|.)+)`, yielding the question-answer pairs. Any malformed responses were discarded.
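A minimal sketch of that parsing step (the helper name and the `<!no_qa!>` handling are illustrative, not the author's exact code):

```python
import re

QA_PATTERN = re.compile(r'#\s?Question:?\s+((?:\n|.)+)#\s?Answer:?\s+((?:\n|.)+)')

def parse_response(response: str):
    """Return (question, answer) from a gpt-4 response, or None if malformed."""
    if '<!no_qa!>' in response:
        return None
    match = QA_PATTERN.search(response)
    if match is None:
        return None  # malformed responses are discarded
    question, answer = (group.strip() for group in match.groups())
    return question, answer
```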
## Changelog 🔄
All notable changes to the dataset are documented in its [Changelog 🔄](https://huggingface.co/datasets/umarbutler/open-australian-legal-qa/blob/main/CHANGELOG.md).
This project adheres to [Keep a Changelog](https://keepachangelog.com/en/1.0.0/) and [Semantic Versioning](https://semver.org/spec/v2.0.0.html).
## Licence 📜
The dataset is distributed under the same licence as the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md).
## Citation 🔖
If you've relied on the dataset for your work, please cite:
```latex
@misc{butler-2023-open-australian-legal-dataset,
author = {Butler, Umar},
year = {2023},
title = {Open Australian Legal QA},
publisher = {Hugging Face},
version = {2.0.0},
doi = {10.57967/hf/1479},
url = {https://huggingface.co/datasets/umarbutler/open-australian-legal-qa}
}
```
## Acknowledgements 🙏
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks Matthew Altenberg, who gave him the idea of using `gpt-4` to synthesise questions and answers from the [Open Australian Legal Corpus](https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus).
The author also acknowledges the creators of the many Python libraries relied upon in the creation of the dataset.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs. | umarbutler/open-australian-legal-qa | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:text2text-generation",
"task_ids:closed-domain-qa",
"annotations_creators:machine-generated",
"language_creators:machine-generated",
"size_categories:1K<n<10K",
"source_datasets:umarbutler/open-australian-legal-corpus",
"language:en",
"license:other",
"law",
"legal",
"australia",
"question-answering",
"qa",
"question-answer",
"text-generation",
"llm",
"chatbot",
"conversational-ai",
"generative-ai",
"natural-language-understanding",
"fine-tuning",
"doi:10.57967/hf/1479",
"region:us"
]
| 2023-11-18T10:35:19+00:00 | {"annotations_creators": ["machine-generated"], "language_creators": ["machine-generated"], "language": ["en"], "license": "other", "size_categories": ["1K<n<10K"], "source_datasets": ["umarbutler/open-australian-legal-corpus"], "task_categories": ["question-answering", "text-generation", "text2text-generation"], "task_ids": ["closed-domain-qa"], "pretty_name": "Open Australian Legal QA", "license_name": "open-australian-legal-corpus", "license_link": "https://huggingface.co/datasets/umarbutler/open-australian-legal-corpus/blob/main/LICENCE.md", "tags": ["law", "legal", "australia", "question-answering", "qa", "question-answer", "text-generation", "llm", "chatbot", "conversational-ai", "generative-ai", "natural-language-understanding", "fine-tuning"], "language_details": "en-AU, en-GB", "viewer": true, "dataset_info": {"config_name": "train", "features": [{"name": "question", "dtype": "string"}, {"name": "answer", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "prompt", "dtype": "string"}, {"name": "source", "struct": [{"name": "version_id", "dtype": "string"}, {"name": "type", "dtype": "string"}, {"name": "jurisdiction", "dtype": "string"}, {"name": "source", "dtype": "string"}, {"name": "citation", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "text", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 13243775, "num_examples": 2124}], "download_size": 13538191, "dataset_size": 13243775}} | 2023-12-14T06:58:56+00:00 | []
| [
"en"
]
| TAGS
#task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #task_ids-closed-domain-qa #annotations_creators-machine-generated #language_creators-machine-generated #size_categories-1K<n<10K #source_datasets-umarbutler/open-australian-legal-corpus #language-English #license-other #law #legal #australia #question-answering #qa #question-answer #text-generation #llm #chatbot #conversational-ai #generative-ai #natural-language-understanding #fine-tuning #doi-10.57967/hf/1479 #region-us
| Open Australian Legal QA ️
===========================
<a href="URL alt="Release"><img src="URL
Open Australian Legal QA is the first open dataset of Australian legal questions and answers.
Comprised of 2,124 questions and answers synthesised by 'gpt-4' from the Open Australian Legal Corpus, the largest open database of Australian law, the dataset is intended to facilitate the development of legal AI assistants in Australia.
To ensure its accessibility to as wide an audience as possible, the dataset is distributed under the same licence as the Open Australian Legal Corpus.
Usage
-------
The below code snippet illustrates how the dataset may be loaded with the Hugging Face Datasets Python library:
To speed up the loading of the dataset, you may wish to install 'orjson'.
Structure ️
-----------
The dataset is stored in URL, a JSON Lines file where each line represents a question-answer pair consisting of the following keys:
Methodology
-----------
2,124 documents from the Open Australian Legal Corpus were randomly sampled, barring bills and documents consisting entirely of whitespace. These documents were then split into semantically meaningful chunks up to 384-tokens-long (as determined by 'tiktoken''s tokeniser for 'gpt-4') with the 'semchunk' Python library.
Chunks that consisted entirely of whitespace, contained 6 or more consecutive periods, ignoring whitespace (indicating that they contained a table of contents) or that were less than 96-tokens-long were discarded. A single chunk was randomly selected from each document (for those documents with a chunk to select) and subsequently cleaned of consecutive newlines, consecutive whitespace and lines consisting entirely of whitespace.
These chunks were then embedded into the following prompt, with the names of jurisdictions and types being capitalised and stripped of hyphens:
The resulting prompts were then sent to 'gpt-4' with the following hyperparameters:
'gpt-4''s responses were parsed with the regex pattern '#\s?Question:?\s+((?:\n|.)+)#\s?Answer:?\s+((?:\n|.)+)', yielding the question-answer pairs. Any malformed responses were discarded.
Changelog
---------
All notable changes to the dataset are documented in its Changelog .
This project adheres to Keep a Changelog and Semantic Versioning.
Licence
-------
The dataset is distributed under the same licence as the Open Australian Legal Corpus.
If you've relied on the dataset for your work, please cite:
Acknowledgements
----------------
In the spirit of reconciliation, the author acknowledges the Traditional Custodians of Country throughout Australia and their connections to land, sea and community. He pays his respect to their Elders past and present and extends that respect to all Aboriginal and Torres Strait Islander peoples today.
The author thanks Matthew Altenberg, who gave him the idea of using 'gpt-4' to synthesise questions and answers from the Open Australian Legal Corpus.
The author also acknowledges the creators of the many Python libraries relied upon in the creation of the dataset.
Finally, the author is eternally grateful for the endless support of his wife and her willingness to put up with many a late night spent writing code and quashing bugs.
| []
| [
"TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #task_ids-closed-domain-qa #annotations_creators-machine-generated #language_creators-machine-generated #size_categories-1K<n<10K #source_datasets-umarbutler/open-australian-legal-corpus #language-English #license-other #law #legal #australia #question-answering #qa #question-answer #text-generation #llm #chatbot #conversational-ai #generative-ai #natural-language-understanding #fine-tuning #doi-10.57967/hf/1479 #region-us \n"
]
| [
190
]
| [
"passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-text2text-generation #task_ids-closed-domain-qa #annotations_creators-machine-generated #language_creators-machine-generated #size_categories-1K<n<10K #source_datasets-umarbutler/open-australian-legal-corpus #language-English #license-other #law #legal #australia #question-answering #qa #question-answer #text-generation #llm #chatbot #conversational-ai #generative-ai #natural-language-understanding #fine-tuning #doi-10.57967/hf/1479 #region-us \n"
]
|
cfc73d180b0baba790abd076f372e638d52a63ff | # Jurnal DBP
## Jurnal Bahasa
392 articles
## Jurnal Melayu
146 articles
## Jurnal Kanun
224 articles
## Jurnal Malay Literature
175 articles
Total articles: 937 articles | syafie-nzm/jurnaldbp_pdflink | [
"region:us"
]
| 2023-11-18T10:57:49+00:00 | {} | 2023-11-18T11:07:17+00:00 | []
| []
| TAGS
#region-us
| # Jurnal DBP
## Jurnal Bahasa
392 articles
## Jurnal Melayu
146 articles
## Jurnal Kanun
224 articles
## Jurnal Malay Literature
175 articles
Total articles: 937 articles | [
"# Jurnal DBP",
"## Jurnal Bahasa \n392 articles",
"## Jurnal Melayu \n146 articles",
"## Jurnal Kanun \n224 articles",
"## Jurnal Malay Literature \n175 articles \n\nTotal articles: 937 articles"
]
| [
"TAGS\n#region-us \n",
"# Jurnal DBP",
"## Jurnal Bahasa \n392 articles",
"## Jurnal Melayu \n146 articles",
"## Jurnal Kanun \n224 articles",
"## Jurnal Malay Literature \n175 articles \n\nTotal articles: 937 articles"
]
| [
6,
4,
6,
5,
5,
14
]
| [
"passage: TAGS\n#region-us \n# Jurnal DBP## Jurnal Bahasa \n392 articles## Jurnal Melayu \n146 articles## Jurnal Kanun \n224 articles## Jurnal Malay Literature \n175 articles \n\nTotal articles: 937 articles"
]
|
1dd2a3163f8a199d5de9e7f1ac8012a333b897ed | # Dataset Card for "truthfulqa-sft"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jzjiao/truthfulqa-sft | [
"region:us"
]
| 2023-11-18T11:13:14+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "ground_truth_output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3361565, "num_examples": 5918}], "download_size": 335400, "dataset_size": 3361565}} | 2023-11-18T11:13:28+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "truthfulqa-sft"
More Information needed | [
"# Dataset Card for \"truthfulqa-sft\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"truthfulqa-sft\"\n\nMore Information needed"
]
| [
6,
17
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"truthfulqa-sft\"\n\nMore Information needed"
]
|
ef461f317abe627414d30e64c32ba361098386e5 |
# Bangumi Image Base of Helck
This is the image base of bangumi Helck. We detected 45 characters and 2595 images in total. The full dataset is [here](all.zip).
**Please note that these image bases are not guaranteed to be 100% clean; some images may actually be noisy.** If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| # | Images | Download | Preview 1 | Preview 2 | Preview 3 | Preview 4 | Preview 5 | Preview 6 | Preview 7 | Preview 8 |
|:------|---------:|:---------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|:-------------------------------|
| 0 | 570 | [Download](0/dataset.zip) |  |  |  |  |  |  |  |  |
| 1 | 50 | [Download](1/dataset.zip) |  |  |  |  |  |  |  |  |
| 2 | 39 | [Download](2/dataset.zip) |  |  |  |  |  |  |  |  |
| 3 | 38 | [Download](3/dataset.zip) |  |  |  |  |  |  |  |  |
| 4 | 17 | [Download](4/dataset.zip) |  |  |  |  |  |  |  |  |
| 5 | 96 | [Download](5/dataset.zip) |  |  |  |  |  |  |  |  |
| 6 | 25 | [Download](6/dataset.zip) |  |  |  |  |  |  |  |  |
| 7 | 9 | [Download](7/dataset.zip) |  |  |  |  |  |  |  |  |
| 8 | 60 | [Download](8/dataset.zip) |  |  |  |  |  |  |  |  |
| 9 | 30 | [Download](9/dataset.zip) |  |  |  |  |  |  |  |  |
| 10 | 42 | [Download](10/dataset.zip) |  |  |  |  |  |  |  |  |
| 11 | 9 | [Download](11/dataset.zip) |  |  |  |  |  |  |  |  |
| 12 | 17 | [Download](12/dataset.zip) |  |  |  |  |  |  |  |  |
| 13 | 12 | [Download](13/dataset.zip) |  |  |  |  |  |  |  |  |
| 14 | 15 | [Download](14/dataset.zip) |  |  |  |  |  |  |  |  |
| 15 | 17 | [Download](15/dataset.zip) |  |  |  |  |  |  |  |  |
| 16 | 63 | [Download](16/dataset.zip) |  |  |  |  |  |  |  |  |
| 17 | 12 | [Download](17/dataset.zip) |  |  |  |  |  |  |  |  |
| 18 | 118 | [Download](18/dataset.zip) |  |  |  |  |  |  |  |  |
| 19 | 32 | [Download](19/dataset.zip) |  |  |  |  |  |  |  |  |
| 20 | 89 | [Download](20/dataset.zip) |  |  |  |  |  |  |  |  |
| 21 | 40 | [Download](21/dataset.zip) |  |  |  |  |  |  |  |  |
| 22 | 30 | [Download](22/dataset.zip) |  |  |  |  |  |  |  |  |
| 23 | 78 | [Download](23/dataset.zip) |  |  |  |  |  |  |  |  |
| 24 | 501 | [Download](24/dataset.zip) |  |  |  |  |  |  |  |  |
| 25 | 40 | [Download](25/dataset.zip) |  |  |  |  |  |  |  |  |
| 26 | 44 | [Download](26/dataset.zip) |  |  |  |  |  |  |  |  |
| 27 | 19 | [Download](27/dataset.zip) |  |  |  |  |  |  |  |  |
| 28 | 20 | [Download](28/dataset.zip) |  |  |  |  |  |  |  |  |
| 29 | 19 | [Download](29/dataset.zip) |  |  |  |  |  |  |  |  |
| 30 | 17 | [Download](30/dataset.zip) |  |  |  |  |  |  |  |  |
| 31 | 44 | [Download](31/dataset.zip) |  |  |  |  |  |  |  |  |
| 32 | 7 | [Download](32/dataset.zip) |  |  |  |  |  |  |  | N/A |
| 33 | 79 | [Download](33/dataset.zip) |  |  |  |  |  |  |  |  |
| 34 | 11 | [Download](34/dataset.zip) |  |  |  |  |  |  |  |  |
| 35 | 18 | [Download](35/dataset.zip) |  |  |  |  |  |  |  |  |
| 36 | 45 | [Download](36/dataset.zip) |  |  |  |  |  |  |  |  |
| 37 | 20 | [Download](37/dataset.zip) |  |  |  |  |  |  |  |  |
| 38 | 12 | [Download](38/dataset.zip) |  |  |  |  |  |  |  |  |
| 39 | 9 | [Download](39/dataset.zip) |  |  |  |  |  |  |  |  |
| 40 | 30 | [Download](40/dataset.zip) |  |  |  |  |  |  |  |  |
| 41 | 10 | [Download](41/dataset.zip) |  |  |  |  |  |  |  |  |
| 42 | 10 | [Download](42/dataset.zip) |  |  |  |  |  |  |  |  |
| 43 | 6 | [Download](43/dataset.zip) |  |  |  |  |  |  | N/A | N/A |
| noise | 126 | [Download](-1/dataset.zip) |  |  |  |  |  |  |  |  |
| BangumiBase/helck | [
"size_categories:1K<n<10K",
"license:mit",
"art",
"region:us"
]
| 2023-11-18T11:19:17+00:00 | {"license": "mit", "size_categories": ["1K<n<10K"], "tags": ["art"]} | 2023-11-18T12:49:40+00:00 | []
| []
| TAGS
#size_categories-1K<n<10K #license-mit #art #region-us
| Bangumi Image Base of Helck
===========================
This is the image base of bangumi Helck. We detected 45 characters and 2595 images in total. The full dataset is here.
Please note that these image bases are not guaranteed to be 100% clean; some images may actually be noisy. If you intend to manually train models using this dataset, we recommend performing the necessary preprocessing on the downloaded dataset to eliminate potentially noisy samples (approximately 1% probability).
Here is the characters' preview:
| []
| [
"TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
| [
25
]
| [
"passage: TAGS\n#size_categories-1K<n<10K #license-mit #art #region-us \n"
]
|
e8907ed4b8e7c35444eef54246bb549e4fffa3fb | # Dataset Card for "data_deduplicated_part05"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | phanvancongthanh/data_deduplicated_part05 | [
"region:us"
]
| 2023-11-18T11:31:14+00:00 | {"dataset_info": {"features": [{"name": "smiles", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3402771276, "num_examples": 70962925}], "download_size": 1792627444, "dataset_size": 3402771276}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-11-18T11:32:50+00:00 | []
| []
| TAGS
#region-us
| # Dataset Card for "data_deduplicated_part05"
More Information needed | [
"# Dataset Card for \"data_deduplicated_part05\"\n\nMore Information needed"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for \"data_deduplicated_part05\"\n\nMore Information needed"
]
| [
6,
19
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for \"data_deduplicated_part05\"\n\nMore Information needed"
]
|
aa5a38b97ba141cdc55030b59f8aa93df7d0f83c |
This is the dataset from the paper [GAST: Geometry-Aware Structure Transformer](https://openaccess.thecvf.com/content/WACV2024W/CV4EO/papers/Khomiakov_GAST_Geometry-Aware_Structure_Transformer_WACVW_2024_paper.pdf)
# Accessing the dataset
Instructions on loading and utilising the dataset are coming soon. The json file format is the canonical MSCOCO.
# Inference checkpoints
Coming soon | pihalf/ERBD | [
"license:mit",
"region:us"
]
| 2023-11-18T11:42:24+00:00 | {"license": "mit"} | 2024-01-30T09:20:24+00:00 | []
| []
| TAGS
#license-mit #region-us
|
This is the dataset from the paper GAST: Geometry-Aware Structure Transformer
# Accessing the dataset
Instructions on loading and utilising the dataset are coming soon. The json file format is the canonical MSCOCO.
# Inference checkpoints
Coming soon | [
"# Accessing the dataset\n\nInstructions on loading and utilising the dataset are coming soon. The json file format is the canonical MSCOCO.",
"# Inference checkpoints\n\nComing soon"
]
| [
"TAGS\n#license-mit #region-us \n",
"# Accessing the dataset\n\nInstructions on loading and utilising the dataset are coming soon. The json file format is the canonical MSCOCO.",
"# Inference checkpoints\n\nComing soon"
]
| [
11,
35,
10
]
| [
"passage: TAGS\n#license-mit #region-us \n# Accessing the dataset\n\nInstructions on loading and utilising the dataset are coming soon. The json file format is the canonical MSCOCO.# Inference checkpoints\n\nComing soon"
]
|
d2c166bea3dda39ae6d87336e1cbc3cbc0b4199b | # HelpSteer-filtered
This dataset is a highly filtered version of the [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) dataset.
# ❓ How this dataset was filtered:
1. I calculated the sum of the columns `["helpfulness", "correctness", "coherence", "complexity", "verbosity"]` and created a new column named `sum`.
2. I changed some column names and added an **empty column** to match the Alpaca format.
3. The dataset was then filtered to include only those entries with a sum greater than or equal to 16.
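A rough pandas sketch of these three steps (the Alpaca-style target column names are an assumption; the card does not spell them out):

```python
import pandas as pd
from datasets import load_dataset

# 1. Sum the five attribute columns of the original HelpSteer data.
df = load_dataset("nvidia/HelpSteer", split="train").to_pandas()
score_cols = ["helpfulness", "correctness", "coherence", "complexity", "verbosity"]
df["sum"] = df[score_cols].sum(axis=1)

# 2. Rename columns to the Alpaca format and add an empty input column
#    (assumed mapping: prompt -> instruction, response -> output).
df = df.rename(columns={"prompt": "instruction", "response": "output"})
df["input"] = ""

# 3. Keep only entries whose attribute sum is at least 16.
filtered = df[df["sum"] >= 16]
```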
# 🧐 More Information
You can find more information about the unfiltered dataset here:
- [nvidia/HelpSteer](https://huggingface.co/datasets/nvidia/HelpSteer) | Weyaxi/HelpSteer-filtered | [
"license:cc-by-4.0",
"region:us"
]
| 2023-11-18T12:57:18+00:00 | {"license": "cc-by-4.0"} | 2023-11-24T19:34:25+00:00 | []
| []
| TAGS
#license-cc-by-4.0 #region-us
| # HelpSteer-filtered
This dataset is a highly filtered version of the nvidia/HelpSteer dataset.
# How this dataset was filtered:
1. I calculated the sum of the columns '["helpfulness", "correctness", "coherence", "complexity", "verbosity"]' and created a new column named 'sum'.

2. I changed some column names and added an empty column to match the Alpaca format.
3. The dataset was then filtered to include only those entries with a sum greater than or equal to 16.
# More Information
You can find more information about the unfiltered dataset here:
- nvidia/HelpSteer | [
"# HelpSteer-filtered\n\nThis dataset is a highly filtered version of the nvidia/HelpSteer dataset.",
"# How this dataset was filtered:\n\n1. I calculated the sum of the columns '[\"helpfulness,\" \"correctness,\" \"coherence,\" \"complexity,\" \"verbosity\"]' and created a new column named 'sum'.\n\n2. I changed some column names and added a empty column to match the Alpaca format.\n \n3. The dataset was then filtered to include only those entries with a sum greater than or equal to 16.",
"# More Information\n\nYou can find more information about the unfiltered dataset here:\n\n- nvidia/HelpSteer"
]
| [
"TAGS\n#license-cc-by-4.0 #region-us \n",
"# HelpSteer-filtered\n\nThis dataset is a highly filtered version of the nvidia/HelpSteer dataset.",
"# How this dataset was filtered:\n\n1. I calculated the sum of the columns '[\"helpfulness,\" \"correctness,\" \"coherence,\" \"complexity,\" \"verbosity\"]' and created a new column named 'sum'.\n\n2. I changed some column names and added a empty column to match the Alpaca format.\n \n3. The dataset was then filtered to include only those entries with a sum greater than or equal to 16.",
"# More Information\n\nYou can find more information about the unfiltered dataset here:\n\n- nvidia/HelpSteer"
]
| [
15,
29,
111,
26
]
| [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n# HelpSteer-filtered\n\nThis dataset is a highly filtered version of the nvidia/HelpSteer dataset.# How this dataset was filtered:\n\n1. I calculated the sum of the columns '[\"helpfulness,\" \"correctness,\" \"coherence,\" \"complexity,\" \"verbosity\"]' and created a new column named 'sum'.\n\n2. I changed some column names and added a empty column to match the Alpaca format.\n \n3. The dataset was then filtered to include only those entries with a sum greater than or equal to 16.# More Information\n\nYou can find more information about the unfiltered dataset here:\n\n- nvidia/HelpSteer"
]
|
259de4cad9b258f8d867b6ce297f4c841fd53be8 | French law about STAD (système de traitement automatisé de données, i.e. automated data processing systems) | john97843/fr-law-stad | [
"region:us"
]
| 2023-11-18T13:01:34+00:00 | {} | 2023-11-18T13:02:30+00:00 | []
| []
| TAGS
#region-us
| French law about STAD (système de traitement automatisé de données, i.e. automated data processing systems) | []
| [
"TAGS\n#region-us \n"
]
| [
6
]
| [
"passage: TAGS\n#region-us \n"
]
|
e070967446c5c58b4c7268ea9106d328a8f897fd | ## Methodology
Used GPT-3.5 with a translation prompt to produce two Norwegian translations of each English text - a normal one and a "more natural" one. Context is also generated.
## Data
Input sample format:
```
Oversett til Norsk:
{text}
```
Each output sample is formatted like this:
```
Kontekst: {a, b, c}
Oversettelse:
{translation}
Mer naturlig:
{improved_translation}
```
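As a small illustrative helper (the dict field names are assumptions and may not match the released column names), one such pair can be assembled like this:
```python
def build_example(text: str, context: str, translation: str, improved_translation: str) -> dict:
    """Assemble one input/output pair in the format described above."""
    return {
        "input": f"Oversett til Norsk:\n{text}",
        "output": (
            f"Kontekst: {context}\n"
            "Oversettelse:\n"
            f"{translation}\n"
            "Mer naturlig:\n"
            f"{improved_translation}"
        ),
    }
```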
## Future work
The dataset will be used to train LLM-based translation models based on Llama 2 and similar, aiming to rival DeepL and ChatGPT machine translation.
## Dataset used
Original English text from: [nampdn-ai/tiny-textbooks](https://huggingface.co/datasets/nampdn-ai/tiny-textbooks)
### Author Contact
[email protected] | thevox/en-nb-15k | [
"task_categories:translation",
"size_categories:10K<n<100K",
"language:en",
"language:nb",
"language:no",
"license:mpl-2.0",
"region:us"
]
| 2023-11-18T13:18:02+00:00 | {"language": ["en", "nb", "no"], "license": "mpl-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["translation"], "pretty_name": "English-Norwegian Translation"} | 2023-11-18T13:18:52+00:00 | []
| [
"en",
"nb",
"no"
]
| TAGS
#task_categories-translation #size_categories-10K<n<100K #language-English #language-Norwegian Bokmål #language-Norwegian #license-mpl-2.0 #region-us
| ## Methodology
Used GPT-3.5 with a translation prompt to produce two Norwegian translations of each English text - a normal one and a "more natural" one. Context is also generated.
## Data
Input sample format:
Each output sample is formatted like this:
## Future work
The dataset will be used to train LLM-based translation models based on Llama 2 and similar, aiming to rival DeepL and ChatGPT machine translation.
## Dataset used
Original English text from: nampdn-ai/tiny-textbooks
### Author Contact
jonaslsa@URL | [
"## Methadology\nUsed GPT-3.5 with a translation prompt to give two versions of translation in Norwegian from English - normal and \"more natural\". Context is also generated.",
"## Data\nInput sample format:\n\nEach output sample is formatted like this:",
"## Future work\nDataset will be used to train LLM-based translation models based on LLama 2 and similar to try to rival DeepL and ChatGPT machine translation.",
"## Dataset used\nOriginal english text from: nampdn-ai/tiny-textbooks",
"### Author Contaxt\njonaslsa@URL"
]
| [
"TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-English #language-Norwegian Bokmål #language-Norwegian #license-mpl-2.0 #region-us \n",
"## Methadology\nUsed GPT-3.5 with a translation prompt to give two versions of translation in Norwegian from English - normal and \"more natural\". Context is also generated.",
"## Data\nInput sample format:\n\nEach output sample is formatted like this:",
"## Future work\nDataset will be used to train LLM-based translation models based on LLama 2 and similar to try to rival DeepL and ChatGPT machine translation.",
"## Dataset used\nOriginal english text from: nampdn-ai/tiny-textbooks",
"### Author Contaxt\njonaslsa@URL"
]
| [
53,
39,
16,
36,
20,
10
]
| [
"passage: TAGS\n#task_categories-translation #size_categories-10K<n<100K #language-English #language-Norwegian Bokmål #language-Norwegian #license-mpl-2.0 #region-us \n## Methadology\nUsed GPT-3.5 with a translation prompt to give two versions of translation in Norwegian from English - normal and \"more natural\". Context is also generated.## Data\nInput sample format:\n\nEach output sample is formatted like this:## Future work\nDataset will be used to train LLM-based translation models based on LLama 2 and similar to try to rival DeepL and ChatGPT machine translation.## Dataset used\nOriginal english text from: nampdn-ai/tiny-textbooks### Author Contaxt\njonaslsa@URL"
]
|
d409fa4fc709d802318b992ce4dd3cb559de40d4 |
## Introduction
This is the currently released Muice training set, 200 entries in total, covering categories such as self-cognition, emotional dialogue and dialogue style. As Muice develops, more training sets will be released.
## License
This training set is currently released under CC-BY-NC-4.0, which means that **apart from commercial use, and provided the author is credited, you may use this training set in any way**. We hope you build your own Muice soon!
## Sources
Twitter comment sections, Bilibili live-stream danmaku, Douban, [SelfCognition](https://github.com/hiyouga/ChatGLM-Efficient-Tuning/blob/main/data/self_cognition.json)
| Moemu/Muice-Dataset | [
"task_categories:question-answering",
"task_categories:text-generation",
"task_categories:conversational",
"size_categories:n<1k",
"language:zh",
"license:cc-by-nc-4.0",
"ChatGLM",
"region:us"
]
| 2023-11-18T13:45:13+00:00 | {"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["n<1k"], "task_categories": ["question-answering", "text-generation", "conversational"], "tags": ["ChatGLM"]} | 2024-01-26T11:34:48+00:00 | []
| [
"zh"
]
| TAGS
#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-n<1k #language-Chinese #license-cc-by-nc-4.0 #ChatGLM #region-us
|
## Introduction
This is the currently released Muice training set, 200 entries in total, covering categories such as self-cognition, emotional dialogue and dialogue style. As Muice develops, more training sets will be released.
## License
This training set is currently released under CC-BY-NC-4.0, which means that apart from commercial use, and provided the author is credited, you may use this training set in any way. We hope you build your own Muice soon!
## Sources
Twitter comment sections, Bilibili live-stream danmaku, Douban, SelfCognition
| [
"## 导言\r\n\r\n这是目前公开的沐雪训练集,一共200条,包含了自我认知,情感对话,对话风格等类。随着沐雪的发展,以后还会有更多的训练集公开",
"## 许可\r\n\r\n本训练集目前使用使用CC-BY-NC-4.0,也就是说,除了商业用途,并在著名作者的情况下,您可以以任何方式使用此训练集,希望各位早日造出自己的沐雪!",
"## 来源\r\n\r\nTwitter评论区、Bilibili直播弹幕、豆瓣、SelfCognition"
]
| [
"TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-n<1k #language-Chinese #license-cc-by-nc-4.0 #ChatGLM #region-us \n",
"## 导言\r\n\r\n这是目前公开的沐雪训练集,一共200条,包含了自我认知,情感对话,对话风格等类。随着沐雪的发展,以后还会有更多的训练集公开",
"## 许可\r\n\r\n本训练集目前使用使用CC-BY-NC-4.0,也就是说,除了商业用途,并在著名作者的情况下,您可以以任何方式使用此训练集,希望各位早日造出自己的沐雪!",
"## 来源\r\n\r\nTwitter评论区、Bilibili直播弹幕、豆瓣、SelfCognition"
]
| [
69,
43,
47,
21
]
| [
"passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #task_categories-conversational #size_categories-n<1k #language-Chinese #license-cc-by-nc-4.0 #ChatGLM #region-us \n## 导言\r\n\r\n这是目前公开的沐雪训练集,一共200条,包含了自我认知,情感对话,对话风格等类。随着沐雪的发展,以后还会有更多的训练集公开## 许可\r\n\r\n本训练集目前使用使用CC-BY-NC-4.0,也就是说,除了商业用途,并在著名作者的情况下,您可以以任何方式使用此训练集,希望各位早日造出自己的沐雪!## 来源\r\n\r\nTwitter评论区、Bilibili直播弹幕、豆瓣、SelfCognition"
]
|
49fa5280f15eec63b9227c043c40e463926520df |
# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-101M-GQA
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BEE-spoke-data__smol_llama-101M-GQA_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-18T22:28:51.599216](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-101M-GQA_public/blob/main/results_2023-11-18T22-28-51.599216.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.243457145392589,
"acc_stderr": 0.030232451207481324,
"acc_norm": 0.2440460125546807,
"acc_norm_stderr": 0.03099855104029764,
"mc1": 0.2484700122399021,
"mc1_stderr": 0.01512742709652069,
"mc2": 0.45801037294841895,
"mc2_stderr": 0.01513659314586415,
"em": 0.0016778523489932886,
"em_stderr": 0.0004191330178826889,
"f1": 0.03420931208053696,
"f1_stderr": 0.0011030675027452802
},
"harness|arc:challenge|25": {
"acc": 0.18088737201365188,
"acc_stderr": 0.011248574467407024,
"acc_norm": 0.23464163822525597,
"acc_norm_stderr": 0.012383873560768675
},
"harness|hellaswag|10": {
"acc": 0.2789285002987453,
"acc_stderr": 0.004475557360359701,
"acc_norm": 0.287293367855009,
"acc_norm_stderr": 0.004515748192605715
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816506,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816506
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.03853254836552004,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.03853254836552004
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.20394736842105263,
"acc_stderr": 0.03279000406310052,
"acc_norm": 0.20394736842105263,
"acc_norm_stderr": 0.03279000406310052
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036625,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036625
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22641509433962265,
"acc_stderr": 0.025757559893106737,
"acc_norm": 0.22641509433962265,
"acc_norm_stderr": 0.025757559893106737
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.2708333333333333,
"acc_stderr": 0.037161774375660164,
"acc_norm": 0.2708333333333333,
"acc_norm_stderr": 0.037161774375660164
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.15,
"acc_stderr": 0.035887028128263714,
"acc_norm": 0.15,
"acc_norm_stderr": 0.035887028128263714
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.18,
"acc_stderr": 0.038612291966536955,
"acc_norm": 0.18,
"acc_norm_stderr": 0.038612291966536955
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.1907514450867052,
"acc_stderr": 0.029957851329869337,
"acc_norm": 0.1907514450867052,
"acc_norm_stderr": 0.029957851329869337
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.22549019607843138,
"acc_stderr": 0.041583075330832865,
"acc_norm": 0.22549019607843138,
"acc_norm_stderr": 0.041583075330832865
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.23829787234042554,
"acc_stderr": 0.02785125297388977,
"acc_norm": 0.23829787234042554,
"acc_norm_stderr": 0.02785125297388977
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.24561403508771928,
"acc_stderr": 0.0404933929774814,
"acc_norm": 0.24561403508771928,
"acc_norm_stderr": 0.0404933929774814
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135303,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135303
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.17989417989417988,
"acc_stderr": 0.019782119832766426,
"acc_norm": 0.17989417989417988,
"acc_norm_stderr": 0.019782119832766426
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.24603174603174602,
"acc_stderr": 0.038522733649243156,
"acc_norm": 0.24603174603174602,
"acc_norm_stderr": 0.038522733649243156
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.23,
"acc_stderr": 0.042295258468165065,
"acc_norm": 0.23,
"acc_norm_stderr": 0.042295258468165065
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.3064516129032258,
"acc_stderr": 0.026226485652553873,
"acc_norm": 0.3064516129032258,
"acc_norm_stderr": 0.026226485652553873
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.28078817733990147,
"acc_stderr": 0.0316185633535861,
"acc_norm": 0.28078817733990147,
"acc_norm_stderr": 0.0316185633535861
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.16,
"acc_stderr": 0.03684529491774709,
"acc_norm": 0.16,
"acc_norm_stderr": 0.03684529491774709
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.22424242424242424,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.22424242424242424,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.26262626262626265,
"acc_stderr": 0.031353050095330855,
"acc_norm": 0.26262626262626265,
"acc_norm_stderr": 0.031353050095330855
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.34196891191709844,
"acc_stderr": 0.03423465100104281,
"acc_norm": 0.34196891191709844,
"acc_norm_stderr": 0.03423465100104281
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.30256410256410254,
"acc_stderr": 0.023290888053772725,
"acc_norm": 0.30256410256410254,
"acc_norm_stderr": 0.023290888053772725
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.2740740740740741,
"acc_stderr": 0.027195934804085622,
"acc_norm": 0.2740740740740741,
"acc_norm_stderr": 0.027195934804085622
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.2815126050420168,
"acc_stderr": 0.02921354941437216,
"acc_norm": 0.2815126050420168,
"acc_norm_stderr": 0.02921354941437216
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.1986754966887417,
"acc_stderr": 0.03257847384436775,
"acc_norm": 0.1986754966887417,
"acc_norm_stderr": 0.03257847384436775
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.22752293577981653,
"acc_stderr": 0.017974463578776502,
"acc_norm": 0.22752293577981653,
"acc_norm_stderr": 0.017974463578776502
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.03350991604696043,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.03350991604696043
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.24509803921568626,
"acc_stderr": 0.030190282453501943,
"acc_norm": 0.24509803921568626,
"acc_norm_stderr": 0.030190282453501943
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.2489451476793249,
"acc_stderr": 0.028146970599422644,
"acc_norm": 0.2489451476793249,
"acc_norm_stderr": 0.028146970599422644
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.21973094170403587,
"acc_stderr": 0.027790177064383605,
"acc_norm": 0.21973094170403587,
"acc_norm_stderr": 0.027790177064383605
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.20610687022900764,
"acc_stderr": 0.03547771004159463,
"acc_norm": 0.20610687022900764,
"acc_norm_stderr": 0.03547771004159463
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2892561983471074,
"acc_stderr": 0.041391127276354626,
"acc_norm": 0.2892561983471074,
"acc_norm_stderr": 0.041391127276354626
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.041331194402438376,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.041331194402438376
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.24539877300613497,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.24539877300613497,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.20535714285714285,
"acc_stderr": 0.03834241021419073,
"acc_norm": 0.20535714285714285,
"acc_norm_stderr": 0.03834241021419073
},
"harness|hendrycksTest-management|5": {
"acc": 0.21359223300970873,
"acc_stderr": 0.040580420156460344,
"acc_norm": 0.21359223300970873,
"acc_norm_stderr": 0.040580420156460344
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.19658119658119658,
"acc_stderr": 0.02603538609895129,
"acc_norm": 0.19658119658119658,
"acc_norm_stderr": 0.02603538609895129
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909283,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909283
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.26436781609195403,
"acc_stderr": 0.01576998484069052,
"acc_norm": 0.26436781609195403,
"acc_norm_stderr": 0.01576998484069052
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.2543352601156069,
"acc_stderr": 0.023445826276545546,
"acc_norm": 0.2543352601156069,
"acc_norm_stderr": 0.023445826276545546
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.24692737430167597,
"acc_stderr": 0.014422292204808835,
"acc_norm": 0.24692737430167597,
"acc_norm_stderr": 0.014422292204808835
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.023805186524888135,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.023805186524888135
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.28938906752411575,
"acc_stderr": 0.025755865922632924,
"acc_norm": 0.28938906752411575,
"acc_norm_stderr": 0.025755865922632924
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.02378858355165855,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.02378858355165855
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.2695035460992908,
"acc_stderr": 0.02646903681859063,
"acc_norm": 0.2695035460992908,
"acc_norm_stderr": 0.02646903681859063
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.2457627118644068,
"acc_stderr": 0.01099615663514269,
"acc_norm": 0.2457627118644068,
"acc_norm_stderr": 0.01099615663514269
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.27941176470588236,
"acc_stderr": 0.027257202606114944,
"acc_norm": 0.27941176470588236,
"acc_norm_stderr": 0.027257202606114944
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.26633986928104575,
"acc_stderr": 0.0178831881346672,
"acc_norm": 0.26633986928104575,
"acc_norm_stderr": 0.0178831881346672
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.21818181818181817,
"acc_stderr": 0.03955932861795833,
"acc_norm": 0.21818181818181817,
"acc_norm_stderr": 0.03955932861795833
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.33877551020408164,
"acc_stderr": 0.030299506562154185,
"acc_norm": 0.33877551020408164,
"acc_norm_stderr": 0.030299506562154185
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.21393034825870647,
"acc_stderr": 0.028996909693328923,
"acc_norm": 0.21393034825870647,
"acc_norm_stderr": 0.028996909693328923
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.24,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.24,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-virology|5": {
"acc": 0.21084337349397592,
"acc_stderr": 0.0317555478662992,
"acc_norm": 0.21084337349397592,
"acc_norm_stderr": 0.0317555478662992
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.2046783625730994,
"acc_stderr": 0.03094445977853321,
"acc_norm": 0.2046783625730994,
"acc_norm_stderr": 0.03094445977853321
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2484700122399021,
"mc1_stderr": 0.01512742709652069,
"mc2": 0.45801037294841895,
"mc2_stderr": 0.01513659314586415
},
"harness|winogrande|5": {
"acc": 0.5067087608524072,
"acc_stderr": 0.014051220692330352
},
"harness|drop|3": {
"em": 0.0016778523489932886,
"em_stderr": 0.0004191330178826889,
"f1": 0.03420931208053696,
"f1_stderr": 0.0011030675027452802
},
"harness|gsm8k|5": {
"acc": 0.0075815011372251705,
"acc_stderr": 0.0023892815120772244
}
}
```
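The aggregated metrics shown above are also exposed through the `results` configuration of this dataset. Below is a minimal sketch of loading them with 🤗 Datasets; the repository id (with the `_public` suffix), the `results` config name, and the `latest` split are taken from the load example and configuration list in this card's metadata, so adjust them if your copy differs.

```python
from datasets import load_dataset

# Aggregated metrics for every evaluation run of this model
# (one row per run timestamp); the "latest" split always points
# at the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_BEE-spoke-data__smol_llama-101M-GQA_public",
    "results",
    split="latest",
)

# Inspect the most recent aggregated scores.
print(results[0])
```

Per-task details (e.g. a single MMLU subject or the Winogrande run) live in their own configurations, named after the harness task, and follow the same timestamped/`latest` split layout.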
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_BEE-spoke-data__smol_llama-101M-GQA | [
"region:us"
]
| 2023-11-18T14:06:15+00:00 | {"pretty_name": "Evaluation run of BEE-spoke-data/smol_llama-101M-GQA", "dataset_summary": "Dataset automatically created during the evaluation run of model [BEE-spoke-data/smol_llama-101M-GQA](https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BEE-spoke-data__smol_llama-101M-GQA_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-18T22:28:51.599216](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-101M-GQA_public/blob/main/results_2023-11-18T22-28-51.599216.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.243457145392589,\n \"acc_stderr\": 0.030232451207481324,\n \"acc_norm\": 0.2440460125546807,\n \"acc_norm_stderr\": 0.03099855104029764,\n \"mc1\": 0.2484700122399021,\n \"mc1_stderr\": 0.01512742709652069,\n \"mc2\": 0.45801037294841895,\n \"mc2_stderr\": 0.01513659314586415,\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.0004191330178826889,\n \"f1\": 0.03420931208053696,\n \"f1_stderr\": 0.0011030675027452802\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.18088737201365188,\n \"acc_stderr\": 0.011248574467407024,\n \"acc_norm\": 0.23464163822525597,\n \"acc_norm_stderr\": 0.012383873560768675\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2789285002987453,\n \"acc_stderr\": 0.004475557360359701,\n \"acc_norm\": 0.287293367855009,\n \"acc_norm_stderr\": 0.004515748192605715\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816506,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816506\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2740740740740741,\n \"acc_stderr\": 0.03853254836552004,\n \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.03853254836552004\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.20394736842105263,\n \"acc_stderr\": 0.03279000406310052,\n \"acc_norm\": 0.20394736842105263,\n \"acc_norm_stderr\": 0.03279000406310052\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036625,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036625\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.22641509433962265,\n \"acc_stderr\": 0.025757559893106737,\n \"acc_norm\": 0.22641509433962265,\n \"acc_norm_stderr\": 0.025757559893106737\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.2708333333333333,\n \"acc_stderr\": 
0.037161774375660164,\n \"acc_norm\": 0.2708333333333333,\n \"acc_norm_stderr\": 0.037161774375660164\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.15,\n \"acc_stderr\": 0.035887028128263714,\n \"acc_norm\": 0.15,\n \"acc_norm_stderr\": 0.035887028128263714\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.18,\n \"acc_stderr\": 0.038612291966536955,\n \"acc_norm\": 0.18,\n \"acc_norm_stderr\": 0.038612291966536955\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.1907514450867052,\n \"acc_stderr\": 0.029957851329869337,\n \"acc_norm\": 0.1907514450867052,\n \"acc_norm_stderr\": 0.029957851329869337\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.22549019607843138,\n \"acc_stderr\": 0.041583075330832865,\n \"acc_norm\": 0.22549019607843138,\n \"acc_norm_stderr\": 0.041583075330832865\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.23829787234042554,\n \"acc_stderr\": 0.02785125297388977,\n \"acc_norm\": 0.23829787234042554,\n \"acc_norm_stderr\": 0.02785125297388977\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.24561403508771928,\n \"acc_stderr\": 0.0404933929774814,\n \"acc_norm\": 0.24561403508771928,\n \"acc_norm_stderr\": 0.0404933929774814\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135303,\n \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135303\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.17989417989417988,\n \"acc_stderr\": 0.019782119832766426,\n \"acc_norm\": 0.17989417989417988,\n \"acc_norm_stderr\": 0.019782119832766426\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.24603174603174602,\n \"acc_stderr\": 0.038522733649243156,\n \"acc_norm\": 0.24603174603174602,\n \"acc_norm_stderr\": 0.038522733649243156\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.042295258468165065,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.042295258468165065\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.3064516129032258,\n \"acc_stderr\": 0.026226485652553873,\n \"acc_norm\": 0.3064516129032258,\n \"acc_norm_stderr\": 0.026226485652553873\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.28078817733990147,\n \"acc_stderr\": 0.0316185633535861,\n \"acc_norm\": 0.28078817733990147,\n \"acc_norm_stderr\": 0.0316185633535861\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.16,\n \"acc_stderr\": 0.03684529491774709,\n \"acc_norm\": 0.16,\n \"acc_norm_stderr\": 0.03684529491774709\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.22424242424242424,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.22424242424242424,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.26262626262626265,\n \"acc_stderr\": 0.031353050095330855,\n \"acc_norm\": 0.26262626262626265,\n \"acc_norm_stderr\": 0.031353050095330855\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": 
{\n \"acc\": 0.34196891191709844,\n \"acc_stderr\": 0.03423465100104281,\n \"acc_norm\": 0.34196891191709844,\n \"acc_norm_stderr\": 0.03423465100104281\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.30256410256410254,\n \"acc_stderr\": 0.023290888053772725,\n \"acc_norm\": 0.30256410256410254,\n \"acc_norm_stderr\": 0.023290888053772725\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.2740740740740741,\n \"acc_stderr\": 0.027195934804085622,\n \"acc_norm\": 0.2740740740740741,\n \"acc_norm_stderr\": 0.027195934804085622\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.2815126050420168,\n \"acc_stderr\": 0.02921354941437216,\n \"acc_norm\": 0.2815126050420168,\n \"acc_norm_stderr\": 0.02921354941437216\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.1986754966887417,\n \"acc_stderr\": 0.03257847384436775,\n \"acc_norm\": 0.1986754966887417,\n \"acc_norm_stderr\": 0.03257847384436775\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.22752293577981653,\n \"acc_stderr\": 0.017974463578776502,\n \"acc_norm\": 0.22752293577981653,\n \"acc_norm_stderr\": 0.017974463578776502\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.03350991604696043,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.03350991604696043\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.24509803921568626,\n \"acc_stderr\": 0.030190282453501943,\n \"acc_norm\": 0.24509803921568626,\n \"acc_norm_stderr\": 0.030190282453501943\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.2489451476793249,\n \"acc_stderr\": 0.028146970599422644,\n \"acc_norm\": 0.2489451476793249,\n \"acc_norm_stderr\": 0.028146970599422644\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.21973094170403587,\n \"acc_stderr\": 0.027790177064383605,\n \"acc_norm\": 0.21973094170403587,\n \"acc_norm_stderr\": 0.027790177064383605\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.20610687022900764,\n \"acc_stderr\": 0.03547771004159463,\n \"acc_norm\": 0.20610687022900764,\n \"acc_norm_stderr\": 0.03547771004159463\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.2892561983471074,\n \"acc_stderr\": 0.041391127276354626,\n \"acc_norm\": 0.2892561983471074,\n \"acc_norm_stderr\": 0.041391127276354626\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.041331194402438376,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.041331194402438376\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.24539877300613497,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.24539877300613497,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.20535714285714285,\n \"acc_stderr\": 0.03834241021419073,\n \"acc_norm\": 0.20535714285714285,\n \"acc_norm_stderr\": 0.03834241021419073\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.21359223300970873,\n \"acc_stderr\": 0.040580420156460344,\n \"acc_norm\": 0.21359223300970873,\n \"acc_norm_stderr\": 0.040580420156460344\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.19658119658119658,\n \"acc_stderr\": 0.02603538609895129,\n \"acc_norm\": 0.19658119658119658,\n \"acc_norm_stderr\": 0.02603538609895129\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n 
\"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909283,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909283\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.26436781609195403,\n \"acc_stderr\": 0.01576998484069052,\n \"acc_norm\": 0.26436781609195403,\n \"acc_norm_stderr\": 0.01576998484069052\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.2543352601156069,\n \"acc_stderr\": 0.023445826276545546,\n \"acc_norm\": 0.2543352601156069,\n \"acc_norm_stderr\": 0.023445826276545546\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.24692737430167597,\n \"acc_stderr\": 0.014422292204808835,\n \"acc_norm\": 0.24692737430167597,\n \"acc_norm_stderr\": 0.014422292204808835\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.023805186524888135,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.023805186524888135\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.28938906752411575,\n \"acc_stderr\": 0.025755865922632924,\n \"acc_norm\": 0.28938906752411575,\n \"acc_norm_stderr\": 0.025755865922632924\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.02378858355165855,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.02378858355165855\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.2695035460992908,\n \"acc_stderr\": 0.02646903681859063,\n \"acc_norm\": 0.2695035460992908,\n \"acc_norm_stderr\": 0.02646903681859063\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.2457627118644068,\n \"acc_stderr\": 0.01099615663514269,\n \"acc_norm\": 0.2457627118644068,\n \"acc_norm_stderr\": 0.01099615663514269\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.27941176470588236,\n \"acc_stderr\": 0.027257202606114944,\n \"acc_norm\": 0.27941176470588236,\n \"acc_norm_stderr\": 0.027257202606114944\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.26633986928104575,\n \"acc_stderr\": 0.0178831881346672,\n \"acc_norm\": 0.26633986928104575,\n \"acc_norm_stderr\": 0.0178831881346672\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.21818181818181817,\n \"acc_stderr\": 0.03955932861795833,\n \"acc_norm\": 0.21818181818181817,\n \"acc_norm_stderr\": 0.03955932861795833\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.33877551020408164,\n \"acc_stderr\": 0.030299506562154185,\n \"acc_norm\": 0.33877551020408164,\n \"acc_norm_stderr\": 0.030299506562154185\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.21393034825870647,\n \"acc_stderr\": 0.028996909693328923,\n \"acc_norm\": 0.21393034825870647,\n \"acc_norm_stderr\": 0.028996909693328923\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.24,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.24,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.21084337349397592,\n \"acc_stderr\": 0.0317555478662992,\n \"acc_norm\": 0.21084337349397592,\n \"acc_norm_stderr\": 0.0317555478662992\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.2046783625730994,\n \"acc_stderr\": 0.03094445977853321,\n \"acc_norm\": 0.2046783625730994,\n \"acc_norm_stderr\": 0.03094445977853321\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2484700122399021,\n \"mc1_stderr\": 0.01512742709652069,\n \"mc2\": 0.45801037294841895,\n \"mc2_stderr\": 0.01513659314586415\n },\n 
\"harness|winogrande|5\": {\n \"acc\": 0.5067087608524072,\n \"acc_stderr\": 0.014051220692330352\n },\n \"harness|drop|3\": {\n \"em\": 0.0016778523489932886,\n \"em_stderr\": 0.0004191330178826889,\n \"f1\": 0.03420931208053696,\n \"f1_stderr\": 0.0011030675027452802\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0075815011372251705,\n \"acc_stderr\": 0.0023892815120772244\n }\n}\n```", "repo_url": "https://huggingface.co/BEE-spoke-data/smol_llama-101M-GQA", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|arc:challenge|25_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|arc:challenge|25_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|drop|3_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|drop|3_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|gsm8k|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|gsm8k|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hellaswag|10_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hellaswag|10_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T14-04-20.381972.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T14-04-20.381972.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T14-04-20.381972.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T22-28-51.599216.parquet", 
"**/details_harness|hendrycksTest-human_aging|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T22-28-51.599216.parquet", 
"**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T22-28-51.599216.parquet", 
"**/details_harness|hendrycksTest-security_studies|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T22-28-51.599216.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": 
["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T22-28-51.599216.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T22-28-51.599216.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": 
["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T22-28-51.599216.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", 
"path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["**/details_harness|winogrande|5_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["**/details_harness|winogrande|5_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-18T22-28-51.599216.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_18T14_04_20.381972", "path": ["results_2023-11-18T14-04-20.381972.parquet"]}, {"split": "2023_11_18T22_28_51.599216", "path": ["results_2023-11-18T22-28-51.599216.parquet"]}, {"split": "latest", "path": ["results_2023-11-18T22-28-51.599216.parquet"]}]}]} | 2023-11-18T22:31:35+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-101M-GQA
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-101M-GQA on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
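For example (a minimal sketch, assuming this dataset follows the same `open-llm-leaderboard/details_<org>__<model>_public` repository naming pattern and configuration names used by the other evaluation datasets in this collection):

```python
from datasets import load_dataset

# Hypothetical repository name, inferred from the leaderboard's naming pattern.
data = load_dataset(
    "open-llm-leaderboard/details_BEE-spoke-data__smol_llama-101M-GQA_public",
    "harness_winogrande_5",
    split="train",
)
```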
## Latest results
These are the latest results from run 2023-11-18T22:28:51.599216 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-101M-GQA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-101M-GQA on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-18T22:28:51.599216(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-101M-GQA",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-101M-GQA on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-18T22:28:51.599216(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-101M-GQA## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-101M-GQA on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-18T22:28:51.599216(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
cd2efca4e1a58f81b921581deef9ead6f4b6b682 |
# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-81M-tied
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/BEE-spoke-data/smol_llama-81M-tied
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [BEE-spoke-data/smol_llama-81M-tied](https://huggingface.co/BEE-spoke-data/smol_llama-81M-tied) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied_public",
"harness_winogrande_5",
split="train")
```
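
Each split holds one row per evaluated example for that task. As a quick check of what a run contains (a minimal sketch; the exact per-example column names depend on the harness version used for the run and may differ):

```python
# Assumes `data` was loaded as in the snippet above.
print(data.column_names)  # per-example fields recorded by the harness
print(data[0])            # inspect the first evaluated example of the run
```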
## Latest results
These are the [latest results from run 2023-11-18T14:15:30.569311](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied_public/blob/main/results_2023-11-18T14-15-30.569311.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.2402171621298779,
"acc_stderr": 0.030151316179768264,
"acc_norm": 0.24121210499496637,
"acc_norm_stderr": 0.030936061025851773,
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.4396548316438528,
"mc2_stderr": 0.015239790495271214,
"em": 0.0026216442953020135,
"em_stderr": 0.0005236685642965868,
"f1": 0.026351719798657795,
"f1_stderr": 0.0010276173957002664
},
"harness|arc:challenge|25": {
"acc": 0.16552901023890784,
"acc_stderr": 0.010860860440277703,
"acc_norm": 0.22184300341296928,
"acc_norm_stderr": 0.012141659068147884
},
"harness|hellaswag|10": {
"acc": 0.2765385381398128,
"acc_stderr": 0.004463721071319102,
"acc_norm": 0.2932682732523402,
"acc_norm_stderr": 0.004543299338935421
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.2962962962962963,
"acc_stderr": 0.03944624162501116,
"acc_norm": 0.2962962962962963,
"acc_norm_stderr": 0.03944624162501116
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.17763157894736842,
"acc_stderr": 0.031103182383123398,
"acc_norm": 0.17763157894736842,
"acc_norm_stderr": 0.031103182383123398
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036623,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036623
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.22641509433962265,
"acc_stderr": 0.025757559893106744,
"acc_norm": 0.22641509433962265,
"acc_norm_stderr": 0.025757559893106744
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.22916666666666666,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.22916666666666666,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.19,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.19,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.2,
"acc_stderr": 0.04020151261036846,
"acc_norm": 0.2,
"acc_norm_stderr": 0.04020151261036846
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.2023121387283237,
"acc_stderr": 0.03063114553919882,
"acc_norm": 0.2023121387283237,
"acc_norm_stderr": 0.03063114553919882
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.21568627450980393,
"acc_stderr": 0.04092563958237654,
"acc_norm": 0.21568627450980393,
"acc_norm_stderr": 0.04092563958237654
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932269,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932269
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.25957446808510637,
"acc_stderr": 0.02865917937429232,
"acc_norm": 0.25957446808510637,
"acc_norm_stderr": 0.02865917937429232
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.04142439719489361,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.04142439719489361
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.2413793103448276,
"acc_stderr": 0.03565998174135302,
"acc_norm": 0.2413793103448276,
"acc_norm_stderr": 0.03565998174135302
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.22486772486772486,
"acc_stderr": 0.02150209607822914,
"acc_norm": 0.22486772486772486,
"acc_norm_stderr": 0.02150209607822914
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.15873015873015872,
"acc_stderr": 0.03268454013011743,
"acc_norm": 0.15873015873015872,
"acc_norm_stderr": 0.03268454013011743
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.19,
"acc_stderr": 0.03942772444036624,
"acc_norm": 0.19,
"acc_norm_stderr": 0.03942772444036624
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.2967741935483871,
"acc_stderr": 0.025988500792411898,
"acc_norm": 0.2967741935483871,
"acc_norm_stderr": 0.025988500792411898
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.29064039408866993,
"acc_stderr": 0.0319474007226554,
"acc_norm": 0.29064039408866993,
"acc_norm_stderr": 0.0319474007226554
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.27,
"acc_stderr": 0.04461960433384741,
"acc_norm": 0.27,
"acc_norm_stderr": 0.04461960433384741
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.24242424242424243,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.24242424242424243,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.2676767676767677,
"acc_stderr": 0.03154449888270285,
"acc_norm": 0.2676767676767677,
"acc_norm_stderr": 0.03154449888270285
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.23834196891191708,
"acc_stderr": 0.030748905363909902,
"acc_norm": 0.23834196891191708,
"acc_norm_stderr": 0.030748905363909902
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.2230769230769231,
"acc_stderr": 0.02110773012724398,
"acc_norm": 0.2230769230769231,
"acc_norm_stderr": 0.02110773012724398
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.21851851851851853,
"acc_stderr": 0.02519575225182379,
"acc_norm": 0.21851851851851853,
"acc_norm_stderr": 0.02519575225182379
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.21008403361344538,
"acc_stderr": 0.026461398717471874,
"acc_norm": 0.21008403361344538,
"acc_norm_stderr": 0.026461398717471874
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.19205298013245034,
"acc_stderr": 0.032162984205936135,
"acc_norm": 0.19205298013245034,
"acc_norm_stderr": 0.032162984205936135
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.20550458715596331,
"acc_stderr": 0.017324352325016012,
"acc_norm": 0.20550458715596331,
"acc_norm_stderr": 0.017324352325016012
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.39814814814814814,
"acc_stderr": 0.033384734032074016,
"acc_norm": 0.39814814814814814,
"acc_norm_stderr": 0.033384734032074016
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.29411764705882354,
"acc_stderr": 0.03198001660115071,
"acc_norm": 0.29411764705882354,
"acc_norm_stderr": 0.03198001660115071
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.26582278481012656,
"acc_stderr": 0.028756799629658342,
"acc_norm": 0.26582278481012656,
"acc_norm_stderr": 0.028756799629658342
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.24663677130044842,
"acc_stderr": 0.02893041312091088,
"acc_norm": 0.24663677130044842,
"acc_norm_stderr": 0.02893041312091088
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.2595419847328244,
"acc_stderr": 0.03844876139785271,
"acc_norm": 0.2595419847328244,
"acc_norm_stderr": 0.03844876139785271
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.2809917355371901,
"acc_stderr": 0.041032038305145124,
"acc_norm": 0.2809917355371901,
"acc_norm_stderr": 0.041032038305145124
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.23148148148148148,
"acc_stderr": 0.04077494709252626,
"acc_norm": 0.23148148148148148,
"acc_norm_stderr": 0.04077494709252626
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.26993865030674846,
"acc_stderr": 0.034878251684978906,
"acc_norm": 0.26993865030674846,
"acc_norm_stderr": 0.034878251684978906
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.23214285714285715,
"acc_stderr": 0.04007341809755806,
"acc_norm": 0.23214285714285715,
"acc_norm_stderr": 0.04007341809755806
},
"harness|hendrycksTest-management|5": {
"acc": 0.17475728155339806,
"acc_stderr": 0.037601780060266224,
"acc_norm": 0.17475728155339806,
"acc_norm_stderr": 0.037601780060266224
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.027236013946196694,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.027236013946196694
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.3,
"acc_stderr": 0.046056618647183814,
"acc_norm": 0.3,
"acc_norm_stderr": 0.046056618647183814
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.2503192848020434,
"acc_stderr": 0.01549108895149459,
"acc_norm": 0.2503192848020434,
"acc_norm_stderr": 0.01549108895149459
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.24566473988439305,
"acc_stderr": 0.02317629820399201,
"acc_norm": 0.24566473988439305,
"acc_norm_stderr": 0.02317629820399201
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2424581005586592,
"acc_stderr": 0.014333522059217889,
"acc_norm": 0.2424581005586592,
"acc_norm_stderr": 0.014333522059217889
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.2222222222222222,
"acc_stderr": 0.02380518652488815,
"acc_norm": 0.2222222222222222,
"acc_norm_stderr": 0.02380518652488815
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.20257234726688103,
"acc_stderr": 0.02282731749105968,
"acc_norm": 0.20257234726688103,
"acc_norm_stderr": 0.02282731749105968
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.023788583551658537,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.023788583551658537
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.22340425531914893,
"acc_stderr": 0.024847921358063962,
"acc_norm": 0.22340425531914893,
"acc_norm_stderr": 0.024847921358063962
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.24771838331160365,
"acc_stderr": 0.011025499291443737,
"acc_norm": 0.24771838331160365,
"acc_norm_stderr": 0.011025499291443737
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.40441176470588236,
"acc_stderr": 0.029812630701569743,
"acc_norm": 0.40441176470588236,
"acc_norm_stderr": 0.029812630701569743
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.26143790849673204,
"acc_stderr": 0.017776947157528027,
"acc_norm": 0.26143790849673204,
"acc_norm_stderr": 0.017776947157528027
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.20909090909090908,
"acc_stderr": 0.038950910157241364,
"acc_norm": 0.20909090909090908,
"acc_norm_stderr": 0.038950910157241364
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.23673469387755103,
"acc_stderr": 0.02721283588407316,
"acc_norm": 0.23673469387755103,
"acc_norm_stderr": 0.02721283588407316
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.23383084577114427,
"acc_stderr": 0.029929415408348398,
"acc_norm": 0.23383084577114427,
"acc_norm_stderr": 0.029929415408348398
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.23,
"acc_stderr": 0.04229525846816505,
"acc_norm": 0.23,
"acc_norm_stderr": 0.04229525846816505
},
"harness|hendrycksTest-virology|5": {
"acc": 0.2891566265060241,
"acc_stderr": 0.03529486801511115,
"acc_norm": 0.2891566265060241,
"acc_norm_stderr": 0.03529486801511115
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.21052631578947367,
"acc_stderr": 0.03126781714663179,
"acc_norm": 0.21052631578947367,
"acc_norm_stderr": 0.03126781714663179
},
"harness|truthfulqa:mc|0": {
"mc1": 0.24357405140758873,
"mc1_stderr": 0.015026354824910782,
"mc2": 0.4396548316438528,
"mc2_stderr": 0.015239790495271214
},
"harness|winogrande|5": {
"acc": 0.4925019731649566,
"acc_stderr": 0.01405090552122858
},
"harness|drop|3": {
"em": 0.0026216442953020135,
"em_stderr": 0.0005236685642965868,
"f1": 0.026351719798657795,
"f1_stderr": 0.0010276173957002664
},
"harness|gsm8k|5": {
"acc": 0.002274450341167551,
"acc_stderr": 0.0013121578148674329
}
}
```
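
The same aggregated numbers are also stored in the "results" configuration, so they can be read programmatically rather than copied from this card. A minimal sketch, assuming the "results" configuration exposes a "latest" split pointing to the most recent results file, as described in the summary above:

```python
from datasets import load_dataset

# "results" aggregates the per-task metrics; "latest" points to the newest run.
results = load_dataset(
    "open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied_public",
    "results",
    split="latest",
)
print(results[0])  # aggregated accuracy / stderr values per task
```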
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied | [
"region:us"
]
| 2023-11-18T14:17:26+00:00 | {"pretty_name": "Evaluation run of BEE-spoke-data/smol_llama-81M-tied", "dataset_summary": "Dataset automatically created during the evaluation run of model [BEE-spoke-data/smol_llama-81M-tied](https://huggingface.co/BEE-spoke-data/smol_llama-81M-tied) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-18T14:15:30.569311](https://huggingface.co/datasets/open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied_public/blob/main/results_2023-11-18T14-15-30.569311.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.2402171621298779,\n \"acc_stderr\": 0.030151316179768264,\n \"acc_norm\": 0.24121210499496637,\n \"acc_norm_stderr\": 0.030936061025851773,\n \"mc1\": 0.24357405140758873,\n \"mc1_stderr\": 0.015026354824910782,\n \"mc2\": 0.4396548316438528,\n \"mc2_stderr\": 0.015239790495271214,\n \"em\": 0.0026216442953020135,\n \"em_stderr\": 0.0005236685642965868,\n \"f1\": 0.026351719798657795,\n \"f1_stderr\": 0.0010276173957002664\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.16552901023890784,\n \"acc_stderr\": 0.010860860440277703,\n \"acc_norm\": 0.22184300341296928,\n \"acc_norm_stderr\": 0.012141659068147884\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.2765385381398128,\n \"acc_stderr\": 0.004463721071319102,\n \"acc_norm\": 0.2932682732523402,\n \"acc_norm_stderr\": 0.004543299338935421\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.2962962962962963,\n \"acc_stderr\": 0.03944624162501116,\n \"acc_norm\": 0.2962962962962963,\n \"acc_norm_stderr\": 0.03944624162501116\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.17763157894736842,\n \"acc_stderr\": 0.031103182383123398,\n \"acc_norm\": 0.17763157894736842,\n \"acc_norm_stderr\": 0.031103182383123398\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036623,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036623\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.22641509433962265,\n \"acc_stderr\": 0.025757559893106744,\n \"acc_norm\": 0.22641509433962265,\n \"acc_norm_stderr\": 0.025757559893106744\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.22916666666666666,\n 
\"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.22916666666666666,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.2,\n \"acc_stderr\": 0.04020151261036846,\n \"acc_norm\": 0.2,\n \"acc_norm_stderr\": 0.04020151261036846\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.2023121387283237,\n \"acc_stderr\": 0.03063114553919882,\n \"acc_norm\": 0.2023121387283237,\n \"acc_norm_stderr\": 0.03063114553919882\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.21568627450980393,\n \"acc_stderr\": 0.04092563958237654,\n \"acc_norm\": 0.21568627450980393,\n \"acc_norm_stderr\": 0.04092563958237654\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932269,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932269\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.25957446808510637,\n \"acc_stderr\": 0.02865917937429232,\n \"acc_norm\": 0.25957446808510637,\n \"acc_norm_stderr\": 0.02865917937429232\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.04142439719489361,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.04142439719489361\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.2413793103448276,\n \"acc_stderr\": 0.03565998174135302,\n \"acc_norm\": 0.2413793103448276,\n \"acc_norm_stderr\": 0.03565998174135302\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.22486772486772486,\n \"acc_stderr\": 0.02150209607822914,\n \"acc_norm\": 0.22486772486772486,\n \"acc_norm_stderr\": 0.02150209607822914\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.15873015873015872,\n \"acc_stderr\": 0.03268454013011743,\n \"acc_norm\": 0.15873015873015872,\n \"acc_norm_stderr\": 0.03268454013011743\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.19,\n \"acc_stderr\": 0.03942772444036624,\n \"acc_norm\": 0.19,\n \"acc_norm_stderr\": 0.03942772444036624\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.2967741935483871,\n \"acc_stderr\": 0.025988500792411898,\n \"acc_norm\": 0.2967741935483871,\n \"acc_norm_stderr\": 0.025988500792411898\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.29064039408866993,\n \"acc_stderr\": 0.0319474007226554,\n \"acc_norm\": 0.29064039408866993,\n \"acc_norm_stderr\": 0.0319474007226554\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.04461960433384741,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.04461960433384741\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.24242424242424243,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.24242424242424243,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.2676767676767677,\n \"acc_stderr\": 0.03154449888270285,\n \"acc_norm\": 0.2676767676767677,\n \"acc_norm_stderr\": 0.03154449888270285\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": 
{\n \"acc\": 0.23834196891191708,\n \"acc_stderr\": 0.030748905363909902,\n \"acc_norm\": 0.23834196891191708,\n \"acc_norm_stderr\": 0.030748905363909902\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.2230769230769231,\n \"acc_stderr\": 0.02110773012724398,\n \"acc_norm\": 0.2230769230769231,\n \"acc_norm_stderr\": 0.02110773012724398\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.21851851851851853,\n \"acc_stderr\": 0.02519575225182379,\n \"acc_norm\": 0.21851851851851853,\n \"acc_norm_stderr\": 0.02519575225182379\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.21008403361344538,\n \"acc_stderr\": 0.026461398717471874,\n \"acc_norm\": 0.21008403361344538,\n \"acc_norm_stderr\": 0.026461398717471874\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.19205298013245034,\n \"acc_stderr\": 0.032162984205936135,\n \"acc_norm\": 0.19205298013245034,\n \"acc_norm_stderr\": 0.032162984205936135\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.20550458715596331,\n \"acc_stderr\": 0.017324352325016012,\n \"acc_norm\": 0.20550458715596331,\n \"acc_norm_stderr\": 0.017324352325016012\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.39814814814814814,\n \"acc_stderr\": 0.033384734032074016,\n \"acc_norm\": 0.39814814814814814,\n \"acc_norm_stderr\": 0.033384734032074016\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.29411764705882354,\n \"acc_stderr\": 0.03198001660115071,\n \"acc_norm\": 0.29411764705882354,\n \"acc_norm_stderr\": 0.03198001660115071\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.26582278481012656,\n \"acc_stderr\": 0.028756799629658342,\n \"acc_norm\": 0.26582278481012656,\n \"acc_norm_stderr\": 0.028756799629658342\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.24663677130044842,\n \"acc_stderr\": 0.02893041312091088,\n \"acc_norm\": 0.24663677130044842,\n \"acc_norm_stderr\": 0.02893041312091088\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.2595419847328244,\n \"acc_stderr\": 0.03844876139785271,\n \"acc_norm\": 0.2595419847328244,\n \"acc_norm_stderr\": 0.03844876139785271\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.2809917355371901,\n \"acc_stderr\": 0.041032038305145124,\n \"acc_norm\": 0.2809917355371901,\n \"acc_norm_stderr\": 0.041032038305145124\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.23148148148148148,\n \"acc_stderr\": 0.04077494709252626,\n \"acc_norm\": 0.23148148148148148,\n \"acc_norm_stderr\": 0.04077494709252626\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.26993865030674846,\n \"acc_stderr\": 0.034878251684978906,\n \"acc_norm\": 0.26993865030674846,\n \"acc_norm_stderr\": 0.034878251684978906\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.23214285714285715,\n \"acc_stderr\": 0.04007341809755806,\n \"acc_norm\": 0.23214285714285715,\n \"acc_norm_stderr\": 0.04007341809755806\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.17475728155339806,\n \"acc_stderr\": 0.037601780060266224,\n \"acc_norm\": 0.17475728155339806,\n \"acc_norm_stderr\": 0.037601780060266224\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.027236013946196694,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.027236013946196694\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n 
\"acc\": 0.3,\n \"acc_stderr\": 0.046056618647183814,\n \"acc_norm\": 0.3,\n \"acc_norm_stderr\": 0.046056618647183814\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.2503192848020434,\n \"acc_stderr\": 0.01549108895149459,\n \"acc_norm\": 0.2503192848020434,\n \"acc_norm_stderr\": 0.01549108895149459\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.24566473988439305,\n \"acc_stderr\": 0.02317629820399201,\n \"acc_norm\": 0.24566473988439305,\n \"acc_norm_stderr\": 0.02317629820399201\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2424581005586592,\n \"acc_stderr\": 0.014333522059217889,\n \"acc_norm\": 0.2424581005586592,\n \"acc_norm_stderr\": 0.014333522059217889\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.2222222222222222,\n \"acc_stderr\": 0.02380518652488815,\n \"acc_norm\": 0.2222222222222222,\n \"acc_norm_stderr\": 0.02380518652488815\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.20257234726688103,\n \"acc_stderr\": 0.02282731749105968,\n \"acc_norm\": 0.20257234726688103,\n \"acc_norm_stderr\": 0.02282731749105968\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.023788583551658537,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.023788583551658537\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.22340425531914893,\n \"acc_stderr\": 0.024847921358063962,\n \"acc_norm\": 0.22340425531914893,\n \"acc_norm_stderr\": 0.024847921358063962\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.24771838331160365,\n \"acc_stderr\": 0.011025499291443737,\n \"acc_norm\": 0.24771838331160365,\n \"acc_norm_stderr\": 0.011025499291443737\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.40441176470588236,\n \"acc_stderr\": 0.029812630701569743,\n \"acc_norm\": 0.40441176470588236,\n \"acc_norm_stderr\": 0.029812630701569743\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.26143790849673204,\n \"acc_stderr\": 0.017776947157528027,\n \"acc_norm\": 0.26143790849673204,\n \"acc_norm_stderr\": 0.017776947157528027\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.20909090909090908,\n \"acc_stderr\": 0.038950910157241364,\n \"acc_norm\": 0.20909090909090908,\n \"acc_norm_stderr\": 0.038950910157241364\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.23673469387755103,\n \"acc_stderr\": 0.02721283588407316,\n \"acc_norm\": 0.23673469387755103,\n \"acc_norm_stderr\": 0.02721283588407316\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.23383084577114427,\n \"acc_stderr\": 0.029929415408348398,\n \"acc_norm\": 0.23383084577114427,\n \"acc_norm_stderr\": 0.029929415408348398\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.23,\n \"acc_stderr\": 0.04229525846816505,\n \"acc_norm\": 0.23,\n \"acc_norm_stderr\": 0.04229525846816505\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.2891566265060241,\n \"acc_stderr\": 0.03529486801511115,\n \"acc_norm\": 0.2891566265060241,\n \"acc_norm_stderr\": 0.03529486801511115\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.21052631578947367,\n \"acc_stderr\": 0.03126781714663179,\n \"acc_norm\": 0.21052631578947367,\n \"acc_norm_stderr\": 0.03126781714663179\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.24357405140758873,\n \"mc1_stderr\": 0.015026354824910782,\n \"mc2\": 0.4396548316438528,\n \"mc2_stderr\": 0.015239790495271214\n 
},\n \"harness|winogrande|5\": {\n \"acc\": 0.4925019731649566,\n \"acc_stderr\": 0.01405090552122858\n },\n \"harness|drop|3\": {\n \"em\": 0.0026216442953020135,\n \"em_stderr\": 0.0005236685642965868,\n \"f1\": 0.026351719798657795,\n \"f1_stderr\": 0.0010276173957002664\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.002274450341167551,\n \"acc_stderr\": 0.0013121578148674329\n }\n}\n```", "repo_url": "https://huggingface.co/BEE-spoke-data/smol_llama-81M-tied", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|arc:challenge|25_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|drop|3_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|gsm8k|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hellaswag|10_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T14-15-30.569311.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T14-15-30.569311.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T14-15-30.569311.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": 
["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": 
["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["**/details_harness|winogrande|5_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-18T14-15-30.569311.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_18T14_15_30.569311", "path": ["results_2023-11-18T14-15-30.569311.parquet"]}, {"split": "latest", "path": ["results_2023-11-18T14-15-30.569311.parquet"]}]}]} | 2023-11-18T14:18:13+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-81M-tied
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-81M-tied on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
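
A minimal loading sketch is given below; the repository id is an assumption based on the usual `open-llm-leaderboard/details_<org>__<model>` naming used for these evaluation dumps, and `harness_winogrande_5` is one of the per-task configurations listed in this card's metadata:

```python
from datasets import load_dataset

# Repository id is assumed from the usual details_<org>__<model> naming pattern.
data = load_dataset(
    "open-llm-leaderboard/details_BEE-spoke-data__smol_llama-81M-tied",
    "harness_winogrande_5",  # one of the 64 per-task configurations
    split="train",           # the "train" split always points to the latest results
)
```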
## Latest results
These are the latest results from run 2023-11-18T14:15:30.569311 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-81M-tied",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-81M-tied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-18T14:15:30.569311(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-81M-tied",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-81M-tied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-18T14:15:30.569311(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of BEE-spoke-data/smol_llama-81M-tied## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model BEE-spoke-data/smol_llama-81M-tied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-18T14:15:30.569311(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
fd51b82097f510f8ebb8debb08f021e640c5f352 | The given dataset is a clone of the [drone dataset](https://www.kaggle.com/datasets/dasmehdixtr/drone-dataset-uav) on Kaggle.
It was created by [Mehdi Özel](https://www.researchgate.net/profile/Mehdi-Oezel). | Leofierus/Drone-Dataset | [
"license:mit",
"region:us"
]
| 2023-11-18T15:09:01+00:00 | {"license": "mit"} | 2023-11-18T15:16:29+00:00 | []
| []
| TAGS
#license-mit #region-us
| The given dataset is a clone of the drone dataset on Kaggle.
It is created by Mehdi Özel. | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
b7725c8bae4122fa493fd852f157cfa060a1cf3e |
# Dataset Card for Evaluation run of Intel/neural-chat-7b-v3-1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Intel/neural-chat-7b-v3-1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1_public",
"harness_winogrande_5",
split="train")
```
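
As a further usage sketch, the aggregated metrics mentioned above can be loaded from the "results" configuration (assuming it exposes a "latest" split, as the per-task configurations listed in this card's metadata do):

```python
from datasets import load_dataset

# Aggregated run metrics; the "results" config and "latest" split are assumed
# to follow the same layout as the per-task configurations of this repository.
results = load_dataset("open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1_public",
	"results",
	split="latest")
print(results[0])
```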
## Latest results
These are the [latest results from run 2023-11-18T15:42:45.444313](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1_public/blob/main/results_2023-11-18T15-42-45.444313.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6203975476749912,
"acc_stderr": 0.03253317374017875,
"acc_norm": 0.6286844485803,
"acc_norm_stderr": 0.03323093034337969,
"mc1": 0.44063647490820074,
"mc1_stderr": 0.01737969755543745,
"mc2": 0.5953808732777186,
"mc2_stderr": 0.015347393503467649,
"em": 0.3183724832214765,
"em_stderr": 0.004770687516057205,
"f1": 0.44000419463087526,
"f1_stderr": 0.00452137107601273
},
"harness|arc:challenge|25": {
"acc": 0.6322525597269625,
"acc_stderr": 0.01409099561816848,
"acc_norm": 0.6629692832764505,
"acc_norm_stderr": 0.013813476652902276
},
"harness|hellaswag|10": {
"acc": 0.6446922923720374,
"acc_stderr": 0.0047762832034680975,
"acc_norm": 0.8359888468432584,
"acc_norm_stderr": 0.003695289340514483
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6148148148148148,
"acc_stderr": 0.04203921040156279,
"acc_norm": 0.6148148148148148,
"acc_norm_stderr": 0.04203921040156279
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.0378272898086547,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.0378272898086547
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620332,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620332
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.02890159361241178,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.02890159361241178
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.037455547914624555,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.037455547914624555
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.04793724854411019,
"acc_norm": 0.35,
"acc_norm_stderr": 0.04793724854411019
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.630057803468208,
"acc_stderr": 0.0368122963339432,
"acc_norm": 0.630057803468208,
"acc_norm_stderr": 0.0368122963339432
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.04858083574266345,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.04858083574266345
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5148936170212766,
"acc_stderr": 0.032671518489247764,
"acc_norm": 0.5148936170212766,
"acc_norm_stderr": 0.032671518489247764
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4649122807017544,
"acc_stderr": 0.046920083813689104,
"acc_norm": 0.4649122807017544,
"acc_norm_stderr": 0.046920083813689104
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5241379310344828,
"acc_stderr": 0.0416180850350153,
"acc_norm": 0.5241379310344828,
"acc_norm_stderr": 0.0416180850350153
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3862433862433862,
"acc_stderr": 0.025075981767601684,
"acc_norm": 0.3862433862433862,
"acc_norm_stderr": 0.025075981767601684
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.0442626668137991,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.0442626668137991
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7709677419354839,
"acc_stderr": 0.023904914311782658,
"acc_norm": 0.7709677419354839,
"acc_norm_stderr": 0.023904914311782658
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5221674876847291,
"acc_stderr": 0.03514528562175008,
"acc_norm": 0.5221674876847291,
"acc_norm_stderr": 0.03514528562175008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7636363636363637,
"acc_stderr": 0.03317505930009182,
"acc_norm": 0.7636363636363637,
"acc_norm_stderr": 0.03317505930009182
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03053289223393202,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03053289223393202
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033446,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033446
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6051282051282051,
"acc_stderr": 0.024784316942156395,
"acc_norm": 0.6051282051282051,
"acc_norm_stderr": 0.024784316942156395
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.028742040903948485,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.028742040903948485
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.030388353551886793,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.030388353551886793
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8366972477064221,
"acc_stderr": 0.015848255806501562,
"acc_norm": 0.8366972477064221,
"acc_norm_stderr": 0.015848255806501562
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4861111111111111,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.4861111111111111,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639325,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639325
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7974683544303798,
"acc_stderr": 0.026160568246601446,
"acc_norm": 0.7974683544303798,
"acc_norm_stderr": 0.026160568246601446
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6816143497757847,
"acc_stderr": 0.03126580522513713,
"acc_norm": 0.6816143497757847,
"acc_norm_stderr": 0.03126580522513713
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.732824427480916,
"acc_stderr": 0.038808483010823944,
"acc_norm": 0.732824427480916,
"acc_norm_stderr": 0.038808483010823944
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7592592592592593,
"acc_stderr": 0.04133119440243839,
"acc_norm": 0.7592592592592593,
"acc_norm_stderr": 0.04133119440243839
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7177914110429447,
"acc_stderr": 0.03536117886664742,
"acc_norm": 0.7177914110429447,
"acc_norm_stderr": 0.03536117886664742
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.48214285714285715,
"acc_stderr": 0.047427623612430116,
"acc_norm": 0.48214285714285715,
"acc_norm_stderr": 0.047427623612430116
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.037601780060266196,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.037601780060266196
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077805,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077805
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8122605363984674,
"acc_stderr": 0.01396439376989914,
"acc_norm": 0.8122605363984674,
"acc_norm_stderr": 0.01396439376989914
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6791907514450867,
"acc_stderr": 0.025131000233647893,
"acc_norm": 0.6791907514450867,
"acc_norm_stderr": 0.025131000233647893
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38100558659217876,
"acc_stderr": 0.016242028834053627,
"acc_norm": 0.38100558659217876,
"acc_norm_stderr": 0.016242028834053627
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.696078431372549,
"acc_stderr": 0.026336613469046626,
"acc_norm": 0.696078431372549,
"acc_norm_stderr": 0.026336613469046626
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6752411575562701,
"acc_stderr": 0.026596782287697043,
"acc_norm": 0.6752411575562701,
"acc_norm_stderr": 0.026596782287697043
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7191358024691358,
"acc_stderr": 0.02500646975579921,
"acc_norm": 0.7191358024691358,
"acc_norm_stderr": 0.02500646975579921
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4432624113475177,
"acc_stderr": 0.029634838473766006,
"acc_norm": 0.4432624113475177,
"acc_norm_stderr": 0.029634838473766006
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4439374185136897,
"acc_stderr": 0.012689708167787684,
"acc_norm": 0.4439374185136897,
"acc_norm_stderr": 0.012689708167787684
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6544117647058824,
"acc_stderr": 0.02888819310398863,
"acc_norm": 0.6544117647058824,
"acc_norm_stderr": 0.02888819310398863
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6421568627450981,
"acc_stderr": 0.019393058402355435,
"acc_norm": 0.6421568627450981,
"acc_norm_stderr": 0.019393058402355435
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.710204081632653,
"acc_stderr": 0.029043088683304328,
"acc_norm": 0.710204081632653,
"acc_norm_stderr": 0.029043088683304328
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8407960199004975,
"acc_stderr": 0.02587064676616913,
"acc_norm": 0.8407960199004975,
"acc_norm_stderr": 0.02587064676616913
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.44063647490820074,
"mc1_stderr": 0.01737969755543745,
"mc2": 0.5953808732777186,
"mc2_stderr": 0.015347393503467649
},
"harness|winogrande|5": {
"acc": 0.7797947908445146,
"acc_stderr": 0.011646276755089691
},
"harness|drop|3": {
"em": 0.3183724832214765,
"em_stderr": 0.004770687516057205,
"f1": 0.44000419463087526,
"f1_stderr": 0.00452137107601273
},
"harness|gsm8k|5": {
"acc": 0.19408642911296436,
"acc_stderr": 0.01089391830819241
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1 | [
"region:us"
]
| 2023-11-18T15:22:20+00:00 | {"pretty_name": "Evaluation run of Intel/neural-chat-7b-v3-1", "dataset_summary": "Dataset automatically created during the evaluation run of model [Intel/neural-chat-7b-v3-1](https://huggingface.co/Intel/neural-chat-7b-v3-1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-18T15:42:45.444313](https://huggingface.co/datasets/open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1_public/blob/main/results_2023-11-18T15-42-45.444313.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6203975476749912,\n \"acc_stderr\": 0.03253317374017875,\n \"acc_norm\": 0.6286844485803,\n \"acc_norm_stderr\": 0.03323093034337969,\n \"mc1\": 0.44063647490820074,\n \"mc1_stderr\": 0.01737969755543745,\n \"mc2\": 0.5953808732777186,\n \"mc2_stderr\": 0.015347393503467649,\n \"em\": 0.3183724832214765,\n \"em_stderr\": 0.004770687516057205,\n \"f1\": 0.44000419463087526,\n \"f1_stderr\": 0.00452137107601273\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6322525597269625,\n \"acc_stderr\": 0.01409099561816848,\n \"acc_norm\": 0.6629692832764505,\n \"acc_norm_stderr\": 0.013813476652902276\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6446922923720374,\n \"acc_stderr\": 0.0047762832034680975,\n \"acc_norm\": 0.8359888468432584,\n \"acc_norm_stderr\": 0.003695289340514483\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6148148148148148,\n \"acc_stderr\": 0.04203921040156279,\n \"acc_norm\": 0.6148148148148148,\n \"acc_norm_stderr\": 0.04203921040156279\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.0378272898086547,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.0378272898086547\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620332,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620332\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.037455547914624555,\n \"acc_norm\": 0.7222222222222222,\n 
\"acc_norm_stderr\": 0.037455547914624555\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.04793724854411019,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.04793724854411019\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.630057803468208,\n \"acc_stderr\": 0.0368122963339432,\n \"acc_norm\": 0.630057803468208,\n \"acc_norm_stderr\": 0.0368122963339432\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.04858083574266345,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.04858083574266345\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5148936170212766,\n \"acc_stderr\": 0.032671518489247764,\n \"acc_norm\": 0.5148936170212766,\n \"acc_norm_stderr\": 0.032671518489247764\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4649122807017544,\n \"acc_stderr\": 0.046920083813689104,\n \"acc_norm\": 0.4649122807017544,\n \"acc_norm_stderr\": 0.046920083813689104\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5241379310344828,\n \"acc_stderr\": 0.0416180850350153,\n \"acc_norm\": 0.5241379310344828,\n \"acc_norm_stderr\": 0.0416180850350153\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3862433862433862,\n \"acc_stderr\": 0.025075981767601684,\n \"acc_norm\": 0.3862433862433862,\n \"acc_norm_stderr\": 0.025075981767601684\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7709677419354839,\n \"acc_stderr\": 0.023904914311782658,\n \"acc_norm\": 0.7709677419354839,\n \"acc_norm_stderr\": 0.023904914311782658\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5221674876847291,\n \"acc_stderr\": 0.03514528562175008,\n \"acc_norm\": 0.5221674876847291,\n \"acc_norm_stderr\": 0.03514528562175008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7636363636363637,\n \"acc_stderr\": 0.03317505930009182,\n \"acc_norm\": 0.7636363636363637,\n \"acc_norm_stderr\": 0.03317505930009182\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03053289223393202,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03053289223393202\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033446,\n 
\"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033446\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6051282051282051,\n \"acc_stderr\": 0.024784316942156395,\n \"acc_norm\": 0.6051282051282051,\n \"acc_norm_stderr\": 0.024784316942156395\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.028742040903948485,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.028742040903948485\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.030388353551886793,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.030388353551886793\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8366972477064221,\n \"acc_stderr\": 0.015848255806501562,\n \"acc_norm\": 0.8366972477064221,\n \"acc_norm_stderr\": 0.015848255806501562\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4861111111111111,\n \"acc_stderr\": 0.03408655867977749,\n \"acc_norm\": 0.4861111111111111,\n \"acc_norm_stderr\": 0.03408655867977749\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639325,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639325\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7974683544303798,\n \"acc_stderr\": 0.026160568246601446,\n \"acc_norm\": 0.7974683544303798,\n \"acc_norm_stderr\": 0.026160568246601446\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6816143497757847,\n \"acc_stderr\": 0.03126580522513713,\n \"acc_norm\": 0.6816143497757847,\n \"acc_norm_stderr\": 0.03126580522513713\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.732824427480916,\n \"acc_stderr\": 0.038808483010823944,\n \"acc_norm\": 0.732824427480916,\n \"acc_norm_stderr\": 0.038808483010823944\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7592592592592593,\n \"acc_stderr\": 0.04133119440243839,\n \"acc_norm\": 0.7592592592592593,\n \"acc_norm_stderr\": 0.04133119440243839\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7177914110429447,\n \"acc_stderr\": 0.03536117886664742,\n \"acc_norm\": 0.7177914110429447,\n \"acc_norm_stderr\": 0.03536117886664742\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.48214285714285715,\n \"acc_stderr\": 0.047427623612430116,\n \"acc_norm\": 0.48214285714285715,\n \"acc_norm_stderr\": 0.047427623612430116\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.037601780060266196,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.037601780060266196\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.022509033937077805,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.022509033937077805\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n 
\"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8122605363984674,\n \"acc_stderr\": 0.01396439376989914,\n \"acc_norm\": 0.8122605363984674,\n \"acc_norm_stderr\": 0.01396439376989914\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6791907514450867,\n \"acc_stderr\": 0.025131000233647893,\n \"acc_norm\": 0.6791907514450867,\n \"acc_norm_stderr\": 0.025131000233647893\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38100558659217876,\n \"acc_stderr\": 0.016242028834053627,\n \"acc_norm\": 0.38100558659217876,\n \"acc_norm_stderr\": 0.016242028834053627\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.696078431372549,\n \"acc_stderr\": 0.026336613469046626,\n \"acc_norm\": 0.696078431372549,\n \"acc_norm_stderr\": 0.026336613469046626\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6752411575562701,\n \"acc_stderr\": 0.026596782287697043,\n \"acc_norm\": 0.6752411575562701,\n \"acc_norm_stderr\": 0.026596782287697043\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7191358024691358,\n \"acc_stderr\": 0.02500646975579921,\n \"acc_norm\": 0.7191358024691358,\n \"acc_norm_stderr\": 0.02500646975579921\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4432624113475177,\n \"acc_stderr\": 0.029634838473766006,\n \"acc_norm\": 0.4432624113475177,\n \"acc_norm_stderr\": 0.029634838473766006\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4439374185136897,\n \"acc_stderr\": 0.012689708167787684,\n \"acc_norm\": 0.4439374185136897,\n \"acc_norm_stderr\": 0.012689708167787684\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6544117647058824,\n \"acc_stderr\": 0.02888819310398863,\n \"acc_norm\": 0.6544117647058824,\n \"acc_norm_stderr\": 0.02888819310398863\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6421568627450981,\n \"acc_stderr\": 0.019393058402355435,\n \"acc_norm\": 0.6421568627450981,\n \"acc_norm_stderr\": 0.019393058402355435\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.710204081632653,\n \"acc_stderr\": 0.029043088683304328,\n \"acc_norm\": 0.710204081632653,\n \"acc_norm_stderr\": 0.029043088683304328\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8407960199004975,\n \"acc_stderr\": 0.02587064676616913,\n \"acc_norm\": 0.8407960199004975,\n \"acc_norm_stderr\": 0.02587064676616913\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.44063647490820074,\n \"mc1_stderr\": 0.01737969755543745,\n \"mc2\": 0.5953808732777186,\n \"mc2_stderr\": 0.015347393503467649\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7797947908445146,\n \"acc_stderr\": 
0.011646276755089691\n },\n \"harness|drop|3\": {\n \"em\": 0.3183724832214765,\n \"em_stderr\": 0.004770687516057205,\n \"f1\": 0.44000419463087526,\n \"f1_stderr\": 0.00452137107601273\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.19408642911296436,\n \"acc_stderr\": 0.01089391830819241\n }\n}\n```", "repo_url": "https://huggingface.co/Intel/neural-chat-7b-v3-1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|arc:challenge|25_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|arc:challenge|25_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|arc:challenge|25_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|drop|3_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|drop|3_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|drop|3_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|gsm8k|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|gsm8k|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|gsm8k|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hellaswag|10_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hellaswag|10_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hellaswag|10_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-19-14.739909.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-19-14.739909.parquet", 
"**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T15-19-14.739909.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-23-13.598780.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T15-23-13.598780.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-42-45.444313.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-42-45.444313.parquet", 
"**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-42-45.444313.parquet", 
"**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T15-42-45.444313.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-23-13.598780.parquet"]}, 
{"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-19-14.739909.parquet"]}, 
{"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": 
["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": 
["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": 
"2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-42-45.444313.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": 
["**/details_harness|truthfulqa:mc|0_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["**/details_harness|winogrande|5_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["**/details_harness|winogrande|5_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["**/details_harness|winogrande|5_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-18T15-42-45.444313.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_18T15_19_14.739909", "path": ["results_2023-11-18T15-19-14.739909.parquet"]}, {"split": "2023_11_18T15_23_13.598780", "path": ["results_2023-11-18T15-23-13.598780.parquet"]}, {"split": "2023_11_18T15_42_45.444313", "path": ["results_2023-11-18T15-42-45.444313.parquet"]}, {"split": "latest", "path": ["results_2023-11-18T15-42-45.444313.parquet"]}]}]} | 2023-11-18T15:46:34+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of Intel/neural-chat-7b-v3-1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Intel/neural-chat-7b-v3-1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
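The snippet below is a minimal sketch using the Hugging Face `datasets` library. The repository id is an assumption based on the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming pattern; the config name and the "latest" split come from this card's metadata:

```python
from datasets import load_dataset

# Assumed repository id, following the leaderboard's usual
# "open-llm-leaderboard/details_<org>__<model>" naming pattern.
repo_id = "open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1"

# Load the per-sample details of one evaluated task (config);
# "harness_winogrande_5" is one of the 64 configs listed in the metadata,
# and "latest" is the split that points at the most recent run.
data = load_dataset(repo_id, "harness_winogrande_5", split="latest")

print(data)      # features and number of rows for this task
print(data[0])   # one evaluated example with the model's predictions
```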
## Latest results
These are the latest results from run 2023-11-18T15:42:45.444313 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
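As a sketch (same assumed repository id as in the loading example above), the aggregated numbers behind this section can be inspected through the "results" config, whose "latest" split corresponds to the 2023-11-18T15:42:45 run listed in the metadata:

```python
from datasets import load_dataset

# Assumed repository id (see the loading example above).
repo_id = "open-llm-leaderboard/details_Intel__neural-chat-7b-v3-1"

# The "results" config aggregates the metrics of each run;
# its "latest" split points at the most recent evaluation.
results = load_dataset(repo_id, "results", split="latest")

print(results)      # schema of the aggregated results table
print(results[0])   # aggregated metrics recorded for this run
```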
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of Intel/neural-chat-7b-v3-1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Intel/neural-chat-7b-v3-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-18T15:42:45.444313(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Intel/neural-chat-7b-v3-1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Intel/neural-chat-7b-v3-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-18T15:42:45.444313(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
20,
31,
169,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Intel/neural-chat-7b-v3-1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Intel/neural-chat-7b-v3-1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-18T15:42:45.444313(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
374bc0844fd83e9179a58590fbf2e0ef5ad2cd82 |
# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b-instruction-sft
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [rinna/bilingual-gpt-neox-4b-instruction-sft](https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b-instruction-sft",
"harness_gsm8k_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-02T14:56:57.726732](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b-instruction-sft/blob/main/results_2023-12-02T14-56-57.726732.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b-instruction-sft | [
"region:us"
]
| 2023-11-18T15:27:51+00:00 | {"pretty_name": "Evaluation run of rinna/bilingual-gpt-neox-4b-instruction-sft", "dataset_summary": "Dataset automatically created during the evaluation run of model [rinna/bilingual-gpt-neox-4b-instruction-sft](https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b-instruction-sft\",\n\t\"harness_gsm8k_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-02T14:56:57.726732](https://huggingface.co/datasets/open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b-instruction-sft/blob/main/results_2023-12-02T14-56-57.726732.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n }\n}\n```", "repo_url": "https://huggingface.co/rinna/bilingual-gpt-neox-4b-instruction-sft", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|arc:challenge|25_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|drop|3_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|gsm8k|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "2023_12_02T14_56_57.726732", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-56-57.726732.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-02T14-56-57.726732.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hellaswag|10_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-24-39.768473.parquet", 
"**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-24-39.768473.parquet", 
"**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-24-39.768473.parquet", 
"**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T15-24-39.768473.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": 
["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-24-39.768473.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["**/details_harness|winogrande|5_2023-11-18T15-24-39.768473.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-18T15-24-39.768473.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_18T15_24_39.768473", "path": ["results_2023-11-18T15-24-39.768473.parquet"]}, {"split": "2023_12_02T14_56_57.726732", "path": ["results_2023-12-02T14-56-57.726732.parquet"]}, {"split": "latest", "path": ["results_2023-12-02T14-56-57.726732.parquet"]}]}]} | 2023-12-02T14:57:05+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b-instruction-sft
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b-instruction-sft on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
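For example, mirroring the loader call shown in the full card above (the `harness_gsm8k_5` configuration is one of the per-task configurations of this dataset):

```python
from datasets import load_dataset

# Load the gsm8k details for this model from the leaderboard details repository.
data = load_dataset(
    "open-llm-leaderboard/details_rinna__bilingual-gpt-neox-4b-instruction-sft",
    "harness_gsm8k_5",
    split="train",  # "train" always points to the latest results
)
```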
## Latest results
These are the latest results from run 2023-12-02T14:56:57.726732 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
| [
"# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b-instruction-sft",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b-instruction-sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:56:57.726732(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b-instruction-sft",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b-instruction-sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-02T14:56:57.726732(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
]
| [
6,
27,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
]
| [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of rinna/bilingual-gpt-neox-4b-instruction-sft## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model rinna/bilingual-gpt-neox-4b-instruction-sft on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-02T14:56:57.726732(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
]
|
d6dc8f01f0af46544e7a54545666fe9e690ce501 | The "medibot_chat.csv" file contains data specifically designed for training the LlaMA2 chat model. The dataset is structured as follows:
<s>[INST]{user_query 1}[/INST]{chatbot_response 1}[INST]{user_query 2}[/INST]{chatbot_response 2}....[INST]{user_query n}[/INST]{chatbot response n}</s>
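As an illustrative sketch (not part of the dataset itself; the function name and example turns below are hypothetical), a multi-turn conversation can be assembled into this template like so:

```python
def build_llama2_chat_sample(turns):
    """Format (user_query, chatbot_response) pairs into the
    <s>[INST]...[/INST]...</s> string layout used in medibot_chat.csv."""
    body = "".join(f"[INST]{user}[/INST]{bot}" for user, bot in turns)
    return f"<s>{body}</s>"

# Hypothetical example turns:
sample = build_llama2_chat_sample([
    ("What are common symptoms of dehydration?",
     "Common symptoms include thirst, dry mouth, dark urine, and fatigue."),
    ("How can it be prevented?",
     "Drink fluids regularly, especially in hot weather or during exercise."),
])
print(sample)
```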
Please note that this dataset was generated with the assistance of ChatGPT 3.5 and may not adhere to medical standards. It is crucial not to integrate this model into any real-life medical applications. For such applications, it is recommended to create a more accurate and verified dataset. The current dataset is intended solely for the purpose of training the LlaMA2 chat model and evaluating the effectiveness of fine-tuning. | maxspin/medibot_dataset | [
"license:mit",
"region:us"
]
| 2023-11-18T16:10:09+00:00 | {"license": "mit"} | 2023-11-18T16:24:18+00:00 | []
| []
| TAGS
#license-mit #region-us
| The "medibot_chat.csv" file contains data specifically designed for training the LlaMA2 chat model. The dataset is structured as follows:
<s>[INST]{user_query 1}[/INST]{chatbot_response 1}[INST]{user_query 2}[/INST]{chatbot_response 2}....[INST]{user_query n}[/INST]{chatbot response n}</s>
Please note that this dataset was generated with the assistance of ChatGPT 3.5 and may not adhere to medical standards. It is crucial not to integrate this model into any real-life medical applications. For such applications, it is recommended to create a more accurate and verified dataset. The current dataset is intended solely for the purpose of training the LlaMA2 chat model and evaluating the effectiveness of fine-tuning. | []
| [
"TAGS\n#license-mit #region-us \n"
]
| [
11
]
| [
"passage: TAGS\n#license-mit #region-us \n"
]
|
38c49b9ecbf75245d0c51bdc6c5d5bae9f8ff1c9 |
# Dataset Card for Evaluation run of openchat/openchat_3.5
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/openchat/openchat_3.5
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_openchat__openchat_3.5_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-19T10:30:18.054013](https://huggingface.co/datasets/open-llm-leaderboard/details_openchat__openchat_3.5_public/blob/main/results_2023-11-19T10-30-18.054013.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6253361427748827,
"acc_stderr": 0.03243199538325514,
"acc_norm": 0.6324168865850391,
"acc_norm_stderr": 0.033117338974973515,
"mc1": 0.3023255813953488,
"mc1_stderr": 0.016077509266133036,
"mc2": 0.4543017595862846,
"mc2_stderr": 0.015109332514210328,
"em": 0.0026216442953020135,
"em_stderr": 0.0005236685642965895,
"f1": 0.0692680369127516,
"f1_stderr": 0.0014684205896877763
},
"harness|arc:challenge|25": {
"acc": 0.5836177474402731,
"acc_stderr": 0.014405618279436174,
"acc_norm": 0.6245733788395904,
"acc_norm_stderr": 0.014150631435111728
},
"harness|hellaswag|10": {
"acc": 0.6450906193985262,
"acc_stderr": 0.0047750796365670966,
"acc_norm": 0.839573790081657,
"acc_norm_stderr": 0.003662508272330902
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.36,
"acc_stderr": 0.04824181513244218,
"acc_norm": 0.36,
"acc_norm_stderr": 0.04824181513244218
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5851851851851851,
"acc_stderr": 0.04256193767901408,
"acc_norm": 0.5851851851851851,
"acc_norm_stderr": 0.04256193767901408
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6776315789473685,
"acc_stderr": 0.03803510248351585,
"acc_norm": 0.6776315789473685,
"acc_norm_stderr": 0.03803510248351585
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.55,
"acc_stderr": 0.05,
"acc_norm": 0.55,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.690566037735849,
"acc_stderr": 0.02845015479411864,
"acc_norm": 0.690566037735849,
"acc_norm_stderr": 0.02845015479411864
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.03745554791462455,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.03745554791462455
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.046882617226215034,
"acc_norm": 0.32,
"acc_norm_stderr": 0.046882617226215034
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6473988439306358,
"acc_stderr": 0.036430371689585475,
"acc_norm": 0.6473988439306358,
"acc_norm_stderr": 0.036430371689585475
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4117647058823529,
"acc_stderr": 0.04897104952726366,
"acc_norm": 0.4117647058823529,
"acc_norm_stderr": 0.04897104952726366
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5276595744680851,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.5276595744680851,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5448275862068965,
"acc_stderr": 0.04149886942192117,
"acc_norm": 0.5448275862068965,
"acc_norm_stderr": 0.04149886942192117
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4021164021164021,
"acc_stderr": 0.02525303255499769,
"acc_norm": 0.4021164021164021,
"acc_norm_stderr": 0.02525303255499769
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.5,
"acc_stderr": 0.04472135954999579,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04472135954999579
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.31,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.31,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7483870967741936,
"acc_stderr": 0.024685979286239963,
"acc_norm": 0.7483870967741936,
"acc_norm_stderr": 0.024685979286239963
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.03517603540361008,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.03517603540361008
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.68,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.68,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7818181818181819,
"acc_stderr": 0.032250781083062896,
"acc_norm": 0.7818181818181819,
"acc_norm_stderr": 0.032250781083062896
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7676767676767676,
"acc_stderr": 0.03008862949021749,
"acc_norm": 0.7676767676767676,
"acc_norm_stderr": 0.03008862949021749
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8652849740932642,
"acc_stderr": 0.024639789097709447,
"acc_norm": 0.8652849740932642,
"acc_norm_stderr": 0.024639789097709447
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6307692307692307,
"acc_stderr": 0.02446861524147893,
"acc_norm": 0.6307692307692307,
"acc_norm_stderr": 0.02446861524147893
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37777777777777777,
"acc_stderr": 0.02956070739246572,
"acc_norm": 0.37777777777777777,
"acc_norm_stderr": 0.02956070739246572
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6512605042016807,
"acc_stderr": 0.030956636328566545,
"acc_norm": 0.6512605042016807,
"acc_norm_stderr": 0.030956636328566545
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.32450331125827814,
"acc_stderr": 0.038227469376587525,
"acc_norm": 0.32450331125827814,
"acc_norm_stderr": 0.038227469376587525
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8477064220183487,
"acc_stderr": 0.015405084393157074,
"acc_norm": 0.8477064220183487,
"acc_norm_stderr": 0.015405084393157074
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.48148148148148145,
"acc_stderr": 0.034076320938540516,
"acc_norm": 0.48148148148148145,
"acc_norm_stderr": 0.034076320938540516
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7843137254901961,
"acc_stderr": 0.028867431449849316,
"acc_norm": 0.7843137254901961,
"acc_norm_stderr": 0.028867431449849316
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8185654008438819,
"acc_stderr": 0.025085961144579647,
"acc_norm": 0.8185654008438819,
"acc_norm_stderr": 0.025085961144579647
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6905829596412556,
"acc_stderr": 0.031024411740572206,
"acc_norm": 0.6905829596412556,
"acc_norm_stderr": 0.031024411740572206
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306085,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306085
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.03749492448709695,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.03749492448709695
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.03462419931615623,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.03462419931615623
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4375,
"acc_stderr": 0.04708567521880525,
"acc_norm": 0.4375,
"acc_norm_stderr": 0.04708567521880525
},
"harness|hendrycksTest-management|5": {
"acc": 0.8252427184466019,
"acc_stderr": 0.03760178006026621,
"acc_norm": 0.8252427184466019,
"acc_norm_stderr": 0.03760178006026621
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8461538461538461,
"acc_stderr": 0.023636873317489274,
"acc_norm": 0.8461538461538461,
"acc_norm_stderr": 0.023636873317489274
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8071519795657727,
"acc_stderr": 0.014108533515757431,
"acc_norm": 0.8071519795657727,
"acc_norm_stderr": 0.014108533515757431
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7052023121387283,
"acc_stderr": 0.024547617794803828,
"acc_norm": 0.7052023121387283,
"acc_norm_stderr": 0.024547617794803828
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3486033519553073,
"acc_stderr": 0.01593748465668703,
"acc_norm": 0.3486033519553073,
"acc_norm_stderr": 0.01593748465668703
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6764705882352942,
"acc_stderr": 0.026787453111906504,
"acc_norm": 0.6764705882352942,
"acc_norm_stderr": 0.026787453111906504
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6945337620578779,
"acc_stderr": 0.026160584450140446,
"acc_norm": 0.6945337620578779,
"acc_norm_stderr": 0.026160584450140446
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7407407407407407,
"acc_stderr": 0.02438366553103545,
"acc_norm": 0.7407407407407407,
"acc_norm_stderr": 0.02438366553103545
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4680573663624511,
"acc_stderr": 0.012744149704869647,
"acc_norm": 0.4680573663624511,
"acc_norm_stderr": 0.012744149704869647
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6617647058823529,
"acc_stderr": 0.02873932851398357,
"acc_norm": 0.6617647058823529,
"acc_norm_stderr": 0.02873932851398357
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6405228758169934,
"acc_stderr": 0.01941253924203216,
"acc_norm": 0.6405228758169934,
"acc_norm_stderr": 0.01941253924203216
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6979591836734694,
"acc_stderr": 0.029393609319879804,
"acc_norm": 0.6979591836734694,
"acc_norm_stderr": 0.029393609319879804
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8109452736318408,
"acc_stderr": 0.027686913588013014,
"acc_norm": 0.8109452736318408,
"acc_norm_stderr": 0.027686913588013014
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835816,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835816
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8245614035087719,
"acc_stderr": 0.029170885500727665,
"acc_norm": 0.8245614035087719,
"acc_norm_stderr": 0.029170885500727665
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3023255813953488,
"mc1_stderr": 0.016077509266133036,
"mc2": 0.4543017595862846,
"mc2_stderr": 0.015109332514210328
},
"harness|winogrande|5": {
"acc": 0.8105761641673244,
"acc_stderr": 0.011012790432989243
},
"harness|drop|3": {
"em": 0.0026216442953020135,
"em_stderr": 0.0005236685642965895,
"f1": 0.0692680369127516,
"f1_stderr": 0.0014684205896877763
},
"harness|gsm8k|5": {
"acc": 0.2577710386656558,
"acc_stderr": 0.012048370213576602
}
}
```
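The block above is plain JSON keyed by task name, so it can be summarised programmatically. As a minimal sketch (assuming the block has been saved locally under the hypothetical filename `results.json`), the following prints each MMLU (`hendrycksTest`) sub-task together with its accuracy and standard error:

```python
import json

# Load the per-task results shown above (assumed saved as results.json).
with open("results.json") as f:
    results = json.load(f)

# Report accuracy ± stderr for every MMLU (hendrycksTest) sub-task.
for task, metrics in sorted(results.items()):
    if task.startswith("harness|hendrycksTest-"):
        name = task.split("hendrycksTest-")[1].split("|")[0]
        print(f"{name:45s} acc={metrics['acc']:.3f} ± {metrics['acc_stderr']:.3f}")
```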
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
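Although the split documentation is still a placeholder, the `configs` listing embedded further down in this card's metadata shows that each per-task configuration exposes one split per evaluation timestamp (e.g. `2023_11_19T10_30_18.054013`) plus a `latest` alias for the most recent run. As a minimal sketch, loading the latest split of one configuration (using the public mirror and the `harness_winogrande_5` configuration named in the card's own loading example) might look like:

```python
from datasets import load_dataset

# "latest" is an alias for the most recent timestamped split of a configuration.
data = load_dataset(
    "open-llm-leaderboard/details_openchat__openchat_3.5_public",
    "harness_winogrande_5",
    split="latest",
)
print(data)
```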
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed] | open-llm-leaderboard/details_openchat__openchat_3.5 | [
"region:us"
]
| 2023-11-18T16:18:03+00:00 | {"pretty_name": "Evaluation run of openchat/openchat_3.5", "dataset_summary": "Dataset automatically created during the evaluation run of model [openchat/openchat_3.5](https://huggingface.co/openchat/openchat_3.5) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_openchat__openchat_3.5_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-19T10:30:18.054013](https://huggingface.co/datasets/open-llm-leaderboard/details_openchat__openchat_3.5_public/blob/main/results_2023-11-19T10-30-18.054013.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6253361427748827,\n \"acc_stderr\": 0.03243199538325514,\n \"acc_norm\": 0.6324168865850391,\n \"acc_norm_stderr\": 0.033117338974973515,\n \"mc1\": 0.3023255813953488,\n \"mc1_stderr\": 0.016077509266133036,\n \"mc2\": 0.4543017595862846,\n \"mc2_stderr\": 0.015109332514210328,\n \"em\": 0.0026216442953020135,\n \"em_stderr\": 0.0005236685642965895,\n \"f1\": 0.0692680369127516,\n \"f1_stderr\": 0.0014684205896877763\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5836177474402731,\n \"acc_stderr\": 0.014405618279436174,\n \"acc_norm\": 0.6245733788395904,\n \"acc_norm_stderr\": 0.014150631435111728\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6450906193985262,\n \"acc_stderr\": 0.0047750796365670966,\n \"acc_norm\": 0.839573790081657,\n \"acc_norm_stderr\": 0.003662508272330902\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.04824181513244218,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.04824181513244218\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5851851851851851,\n \"acc_stderr\": 0.04256193767901408,\n \"acc_norm\": 0.5851851851851851,\n \"acc_norm_stderr\": 0.04256193767901408\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6776315789473685,\n \"acc_stderr\": 0.03803510248351585,\n \"acc_norm\": 0.6776315789473685,\n \"acc_norm_stderr\": 0.03803510248351585\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.55,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.55,\n \"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.690566037735849,\n \"acc_stderr\": 0.02845015479411864,\n \"acc_norm\": 0.690566037735849,\n \"acc_norm_stderr\": 0.02845015479411864\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.03745554791462455,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.03745554791462455\n },\n 
\"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.046882617226215034,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.046882617226215034\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6473988439306358,\n \"acc_stderr\": 0.036430371689585475,\n \"acc_norm\": 0.6473988439306358,\n \"acc_norm_stderr\": 0.036430371689585475\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4117647058823529,\n \"acc_stderr\": 0.04897104952726366,\n \"acc_norm\": 0.4117647058823529,\n \"acc_norm_stderr\": 0.04897104952726366\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5448275862068965,\n \"acc_stderr\": 0.04149886942192117,\n \"acc_norm\": 0.5448275862068965,\n \"acc_norm_stderr\": 0.04149886942192117\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4021164021164021,\n \"acc_stderr\": 0.02525303255499769,\n \"acc_norm\": 0.4021164021164021,\n \"acc_norm_stderr\": 0.02525303255499769\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04472135954999579,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04472135954999579\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.31,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.31,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7483870967741936,\n \"acc_stderr\": 0.024685979286239963,\n \"acc_norm\": 0.7483870967741936,\n \"acc_norm_stderr\": 0.024685979286239963\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.03517603540361008,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.03517603540361008\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.68,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.68,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7818181818181819,\n \"acc_stderr\": 0.032250781083062896,\n \"acc_norm\": 0.7818181818181819,\n \"acc_norm_stderr\": 0.032250781083062896\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7676767676767676,\n \"acc_stderr\": 0.03008862949021749,\n \"acc_norm\": 0.7676767676767676,\n \"acc_norm_stderr\": 0.03008862949021749\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8652849740932642,\n \"acc_stderr\": 0.024639789097709447,\n \"acc_norm\": 0.8652849740932642,\n \"acc_norm_stderr\": 0.024639789097709447\n 
},\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6307692307692307,\n \"acc_stderr\": 0.02446861524147893,\n \"acc_norm\": 0.6307692307692307,\n \"acc_norm_stderr\": 0.02446861524147893\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37777777777777777,\n \"acc_stderr\": 0.02956070739246572,\n \"acc_norm\": 0.37777777777777777,\n \"acc_norm_stderr\": 0.02956070739246572\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6512605042016807,\n \"acc_stderr\": 0.030956636328566545,\n \"acc_norm\": 0.6512605042016807,\n \"acc_norm_stderr\": 0.030956636328566545\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.32450331125827814,\n \"acc_stderr\": 0.038227469376587525,\n \"acc_norm\": 0.32450331125827814,\n \"acc_norm_stderr\": 0.038227469376587525\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8477064220183487,\n \"acc_stderr\": 0.015405084393157074,\n \"acc_norm\": 0.8477064220183487,\n \"acc_norm_stderr\": 0.015405084393157074\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.48148148148148145,\n \"acc_stderr\": 0.034076320938540516,\n \"acc_norm\": 0.48148148148148145,\n \"acc_norm_stderr\": 0.034076320938540516\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7843137254901961,\n \"acc_stderr\": 0.028867431449849316,\n \"acc_norm\": 0.7843137254901961,\n \"acc_norm_stderr\": 0.028867431449849316\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8185654008438819,\n \"acc_stderr\": 0.025085961144579647,\n \"acc_norm\": 0.8185654008438819,\n \"acc_norm_stderr\": 0.025085961144579647\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6905829596412556,\n \"acc_stderr\": 0.031024411740572206,\n \"acc_norm\": 0.6905829596412556,\n \"acc_norm_stderr\": 0.031024411740572206\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306085,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306085\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.03749492448709695,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.03749492448709695\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.03462419931615623,\n \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.03462419931615623\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4375,\n \"acc_stderr\": 0.04708567521880525,\n \"acc_norm\": 0.4375,\n \"acc_norm_stderr\": 0.04708567521880525\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8252427184466019,\n \"acc_stderr\": 0.03760178006026621,\n \"acc_norm\": 0.8252427184466019,\n \"acc_norm_stderr\": 0.03760178006026621\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8461538461538461,\n \"acc_stderr\": 0.023636873317489274,\n \"acc_norm\": 0.8461538461538461,\n \"acc_norm_stderr\": 0.023636873317489274\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.8071519795657727,\n \"acc_stderr\": 0.014108533515757431,\n \"acc_norm\": 0.8071519795657727,\n \"acc_norm_stderr\": 0.014108533515757431\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7052023121387283,\n \"acc_stderr\": 0.024547617794803828,\n \"acc_norm\": 0.7052023121387283,\n \"acc_norm_stderr\": 0.024547617794803828\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3486033519553073,\n \"acc_stderr\": 0.01593748465668703,\n \"acc_norm\": 0.3486033519553073,\n \"acc_norm_stderr\": 0.01593748465668703\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6764705882352942,\n \"acc_stderr\": 0.026787453111906504,\n \"acc_norm\": 0.6764705882352942,\n \"acc_norm_stderr\": 0.026787453111906504\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6945337620578779,\n \"acc_stderr\": 0.026160584450140446,\n \"acc_norm\": 0.6945337620578779,\n \"acc_norm_stderr\": 0.026160584450140446\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7407407407407407,\n \"acc_stderr\": 0.02438366553103545,\n \"acc_norm\": 0.7407407407407407,\n \"acc_norm_stderr\": 0.02438366553103545\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4680573663624511,\n \"acc_stderr\": 0.012744149704869647,\n \"acc_norm\": 0.4680573663624511,\n \"acc_norm_stderr\": 0.012744149704869647\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6617647058823529,\n \"acc_stderr\": 0.02873932851398357,\n \"acc_norm\": 0.6617647058823529,\n \"acc_norm_stderr\": 0.02873932851398357\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6405228758169934,\n \"acc_stderr\": 0.01941253924203216,\n \"acc_norm\": 0.6405228758169934,\n \"acc_norm_stderr\": 0.01941253924203216\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6979591836734694,\n \"acc_stderr\": 0.029393609319879804,\n \"acc_norm\": 0.6979591836734694,\n \"acc_norm_stderr\": 0.029393609319879804\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8109452736318408,\n \"acc_stderr\": 0.027686913588013014,\n \"acc_norm\": 0.8109452736318408,\n \"acc_norm_stderr\": 0.027686913588013014\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835816,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835816\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8245614035087719,\n \"acc_stderr\": 0.029170885500727665,\n \"acc_norm\": 0.8245614035087719,\n \"acc_norm_stderr\": 0.029170885500727665\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3023255813953488,\n \"mc1_stderr\": 0.016077509266133036,\n \"mc2\": 0.4543017595862846,\n \"mc2_stderr\": 0.015109332514210328\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8105761641673244,\n \"acc_stderr\": 0.011012790432989243\n },\n \"harness|drop|3\": {\n \"em\": 0.0026216442953020135,\n \"em_stderr\": 
0.0005236685642965895,\n \"f1\": 0.0692680369127516,\n \"f1_stderr\": 0.0014684205896877763\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2577710386656558,\n \"acc_stderr\": 0.012048370213576602\n }\n}\n```", "repo_url": "https://huggingface.co/openchat/openchat_3.5", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|arc:challenge|25_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|arc:challenge|25_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|arc:challenge|25_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|drop|3_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|drop|3_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|drop|3_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|gsm8k|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|gsm8k|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|gsm8k|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hellaswag|10_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hellaswag|10_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hellaswag|10_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T16-15-03.792286.parquet", 
"**/details_harness|hendrycksTest-computer_security|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T16-15-03.792286.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T16-15-03.792286.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T16-22-29.903207.parquet", 
"**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-18T16-22-29.903207.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-19T10-30-18.054013.parquet", 
"**/details_harness|hendrycksTest-computer_security|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-19T10-30-18.054013.parquet", 
"**/details_harness|hendrycksTest-professional_medicine|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-19T10-30-18.054013.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-management|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-virology|5_2023-11-19T10-30-18.054013.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-anatomy|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": 
"2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": 
["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_biology|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": 
[{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": 
["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": 
["**/details_harness|hendrycksTest-prehistory|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": 
["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": 
["**/details_harness|winogrande|5_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["**/details_harness|winogrande|5_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["**/details_harness|winogrande|5_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-19T10-30-18.054013.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_18T16_15_03.792286", "path": ["results_2023-11-18T16-15-03.792286.parquet"]}, {"split": "2023_11_18T16_22_29.903207", "path": ["results_2023-11-18T16-22-29.903207.parquet"]}, {"split": "2023_11_19T10_30_18.054013", "path": ["results_2023-11-19T10-30-18.054013.parquet"]}, {"split": "latest", "path": ["results_2023-11-19T10-30-18.054013.parquet"]}]}]} | 2023-11-19T10:34:09+00:00 | []
| []
| TAGS
#region-us
|
# Dataset Card for Evaluation run of openchat/openchat_3.5
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model openchat/openchat_3.5 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
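A minimal sketch with the Hugging Face `datasets` library; the repository id below is an assumption (it is not stated in this card), while the configuration and split names are taken from the configs listed in this repo's metadata:

```python
from datasets import load_dataset

# Repository id is an assumption -- substitute the actual details repo for this model.
data = load_dataset(
    "open-llm-leaderboard/details_openchat__openchat_3.5",
    "harness_winogrande_5",  # any configuration listed in this card's metadata
    split="latest",          # or a timestamped split, e.g. "2023_11_19T10_30_18.054013"
)
print(data[0])  # one evaluated example with the model's predictions
```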
## Latest results
These are the latest results from run 2023-11-19T10:30:18.054013 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and in the "latest" split for each eval):
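The result values themselves are not reproduced here; a hedged sketch of how the aggregated metrics could be pulled with the `datasets` library follows (the repository id is an assumption, while the "results" configuration and "latest" split appear in this repo's configs):

```python
from datasets import load_dataset

# Repository id is an assumption; "results" and "latest" come from this card's configs.
results = load_dataset(
    "open-llm-leaderboard/details_openchat__openchat_3.5",
    "results",
    split="latest",
)
print(results[0])  # aggregated metrics for the most recent evaluation run
```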
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions