| Column | Type | Length range |
|---|---|---|
| sha | string | 40–40 |
| text | string | 1–13.4M |
| id | string | 2–117 |
| tags | list | 1–7.91k |
| created_at | string | 25–25 |
| metadata | string | 2–875k |
| last_modified | string | 25–25 |
| arxiv | list | 0–25 |
| languages | list | 0–7.91k |
| tags_str | string | 17–159k |
| text_str | string | 1–447k |
| text_lists | list | 0–352 |
| processed_texts | list | 1–353 |
| tokens_length | list | 1–353 |
| input_texts | list | 1–40 |
95887a7bdc10e6691c47567e2fda4f24a676cf27
|
# Dataset Card for "wikipedia-zhtw"
The Wikipedia dataset contains articles in many different languages. This dataset is built from the Chinese `zhwiki` download files in the Wikipedia dumps (https://dumps.wikimedia.org/). Each example contains the full content of one Wikipedia article, cleaned to remove unwanted parts (such as references).
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **zhwiki download:** [https://dumps.wikimedia.org/zhwiki](https://dumps.wikimedia.org/zhwiki)
## Data Dump Versions
Because Wikipedia periodically publishes dumps of its site data, the following dumps were available for download as of `2023/10/10`:
|Dump directory|Dump timestamp|
|-------------|--------|
|`20230620/`|01-Aug-2023 09:31|
|`20230701/`|20-Aug-2023 09:41|
|`20230720/`|01-Sep-2023 09:31|
|`20230801/`|20-Sep-2023 09:38|
|`20230820/`|01-Oct-2023 09:34|
|`20230901/`|04-Sep-2023 21:18|
|`20230920/`|22-Sep-2023 01:59|
|`20231001/`|10-Oct-2023 02:55|
|`latest/`|10-Oct-2023 02:55|
This dataset is periodically rebuilt from the most recent clearly dated dump, which is downloaded and cleaned so that it is easy to verify and use.
## Download and Cleaning
1. Download the zhwiki data dump files
2. Extract the article content with the [WikiExtractor](https://github.com/attardi/wikiextractor) package
3. Clean the data and convert it into jsonl files
4. Load the jsonl files with the Hugging Face [Datasets](https://pypi.org/project/datasets/) package and upload them to the Hugging Face Hub (a sketch of steps 3–4 follows)
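Below is a minimal sketch of steps 3–4, assuming the WikiExtractor output has already been merged into a single `articles.jsonl` file (the file name is an assumption for illustration, not part of the actual pipeline):
```python
from datasets import load_dataset

# Step 4: load the cleaned jsonl produced in step 3
# ("articles.jsonl" is a hypothetical file name)
ds = load_dataset("json", data_files="articles.jsonl", split="train")

# ...and push it to the Hugging Face Hub under a dated config
ds.push_to_hub("erhwenkuo/wikipedia-zhtw", config_name="20231001")
```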
## Dataset Structure
An example looks as follows:
{'id': '333',
'url': 'https://zh.wikipedia.org/wiki?curid=333',
'title': '鄧麗君',
'text': '鄧麗君,臺灣歌手、演員及慈善家,本名鄧麗筠。她是20世紀後期華語流行音樂具代表性的人物...'
}
## Data Fields
The data fields are the same across all configurations:
- `id (str)`: ID of the article.
- `url (str)`: URL of the article.
- `title (str)`: Title of the article.
- `text (str)`: Text content of the article.
## Usage
```python
from datasets import load_dataset
# specify the date of the data dump as the second argument
load_dataset("erhwenkuo/wikipedia-zhtw", "20231001")
```
## Licensing Information
Most of Wikipedia's article text and many of its images are co-licensed under the `Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA)` and the `GNU Free Documentation License (GFDL)`.
## Citation
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
|
erhwenkuo/wikipedia-zhtw
|
[
"task_categories:text-generation",
"task_categories:fill-mask",
"size_categories:1M<n<10M",
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-10-10T01:31:00+00:00
|
{"language": ["zh"], "license": "cc-by-sa-3.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "fill-mask"], "dataset_info": {"config_name": "20231001", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1682641991, "num_examples": 1373081}], "download_size": 1064907519, "dataset_size": 1682641991}, "configs": [{"config_name": "20231001", "data_files": [{"split": "train", "path": "20231001/train-*"}]}]}
|
2023-10-10T02:22:43+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Chinese #license-cc-by-sa-3.0 #region-us
|
Dataset Card for "wikipedia-zhtw"
=================================
The Wikipedia dataset contains articles in many different languages. This dataset is built from the Chinese 'zhwiki' download files in the Wikipedia dumps (URL). Each example contains the full content of one Wikipedia article, cleaned to remove unwanted parts (such as references).
* Homepage: URL
* zhwiki download: URL
Data Dump Versions
------------------
Because Wikipedia periodically publishes dumps of its site data, the following dumps were available for download as of '2023/10/10':
This dataset is periodically rebuilt from the most recent clearly dated dump, which is downloaded and cleaned so that it is easy to verify and use.
Download and Cleaning
---------------------
1. Download the zhwiki data dump files
2. Extract the article content with the WikiExtractor package
3. Clean the data and convert it into jsonl files
4. Load the jsonl files with the Hugging Face Datasets package and upload them to the Hugging Face Hub
Dataset Structure
-----------------
An example looks as follows:
{'id': '333',
'url': 'URL
'title': '鄧麗君',
'text': '鄧麗君,臺灣歌手、演員及慈善家,本名鄧麗筠。她是20世紀後期華語流行音樂具代表性的人物...'
}
Data Fields
-----------
The data fields are the same across all configurations:
* 'id (str)': ID of the article.
* 'url (str)': URL of the article.
* 'title (str)': Title of the article.
* 'text (str)': Text content of the article.
Usage
-----
Licensing Information
---------------------
Most of Wikipedia's article text and many of its images are co-licensed under the 'Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA)' and the 'GNU Free Documentation License (GFDL)'.
|
[] |
[
"TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Chinese #license-cc-by-sa-3.0 #region-us \n"
] |
[
56
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-Chinese #license-cc-by-sa-3.0 #region-us \n"
] |
8ec3893029606ff896b8ae180005c1f7225a6212
|
# Dataset Card for "CCU_QA_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Harshithacj123/CCU_QA_dataset
|
[
"region:us"
] |
2023-10-10T01:31:44+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 41353, "num_examples": 50}], "download_size": 23370, "dataset_size": 41353}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T01:31:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "CCU_QA_dataset"
More Information needed
|
[
"# Dataset Card for \"CCU_QA_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"CCU_QA_dataset\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"CCU_QA_dataset\"\n\nMore Information needed"
] |
2d701b123a5ae925abe51de4b0810ceee7aeccbd
|
# Dataset Card for "purdue_reddit_posts_2017_2022"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sheepy928/purdue_reddit_posts_2017_2022
|
[
"region:us"
] |
2023-10-10T01:49:54+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "selftext", "dtype": "string"}, {"name": "created_utc", "dtype": "timestamp[ns]"}, {"name": "url", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 25865572, "num_examples": 78849}], "download_size": 15617426, "dataset_size": 25865572}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T01:49:57+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "purdue_reddit_posts_2017_2022"
More Information needed
|
[
"# Dataset Card for \"purdue_reddit_posts_2017_2022\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"purdue_reddit_posts_2017_2022\"\n\nMore Information needed"
] |
[
6,
23
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"purdue_reddit_posts_2017_2022\"\n\nMore Information needed"
] |
d8fc36a8490f8e7e8d35fd26bb1dc0a110d260e8
|
# Dataset Card for "Purdue_reddit_posts_1500"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
sheepy928/Purdue_reddit_posts_1500_unlabelled
|
[
"region:us"
] |
2023-10-10T01:50:02+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "selftext", "dtype": "string"}, {"name": "created_utc", "dtype": "timestamp[ns]"}, {"name": "url", "dtype": "string"}, {"name": "author", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 504948, "num_examples": 1500}], "download_size": 321568, "dataset_size": 504948}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T01:50:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Purdue_reddit_posts_1500"
More Information needed
|
[
"# Dataset Card for \"Purdue_reddit_posts_1500\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Purdue_reddit_posts_1500\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Purdue_reddit_posts_1500\"\n\nMore Information needed"
] |
cbe989efb671b00158463244f48b8cc7247eacf6
|
# Dataset Card for "wile-e"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
open-phi/wile-e
|
[
"region:us"
] |
2023-10-10T02:02:39+00:00
|
{"dataset_info": {"features": [{"name": "topic", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "concepts", "sequence": "string"}, {"name": "outline", "sequence": "string"}, {"name": "markdown", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 108171787, "num_examples": 933}], "download_size": 41387101, "dataset_size": 108171787}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T02:02:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "wile-e"
More Information needed
|
[
"# Dataset Card for \"wile-e\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"wile-e\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"wile-e\"\n\nMore Information needed"
] |
678bed3d675aa4876798c58e17aea59ac4f17200
|
# Dataset Card for "test_split_with_embeddings_bert_base_portuguese"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
iara-project/test_split_with_embeddings_bert_base_portuguese
|
[
"region:us"
] |
2023-10-10T02:04:00+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "news_id", "dtype": "int64"}, {"name": "embeddings", "dtype": "int64"}, {"name": "sentence", "dtype": "string"}, {"name": "category", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 588008891, "num_examples": 176114}], "download_size": 365796407, "dataset_size": 588008891}}
|
2023-10-10T02:04:17+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "test_split_with_embeddings_bert_base_portuguese"
More Information needed
|
[
"# Dataset Card for \"test_split_with_embeddings_bert_base_portuguese\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"test_split_with_embeddings_bert_base_portuguese\"\n\nMore Information needed"
] |
[
6,
28
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"test_split_with_embeddings_bert_base_portuguese\"\n\nMore Information needed"
] |
5f053dd00f45cdf251ad3dbc24eeb6543fd6c3a4
|
# Dataset Card for "ft-sample"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
open-phi/ft-sample
|
[
"region:us"
] |
2023-10-10T02:04:33+00:00
|
{"dataset_info": {"features": [{"name": "topic", "dtype": "string"}, {"name": "model", "dtype": "string"}, {"name": "concepts", "sequence": "string"}, {"name": "outline", "sequence": "string"}, {"name": "markdown", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 299599047, "num_examples": 4121}], "download_size": 92594714, "dataset_size": 299599047}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T04:10:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "ft-sample"
More Information needed
|
[
"# Dataset Card for \"ft-sample\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"ft-sample\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"ft-sample\"\n\nMore Information needed"
] |
ef22a45c73a66df72f0d3e59cbd84df3b59a4756
|
# Dataset Card for "psychology-dataset-v2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
fiveflow/psychology-dataset-v2
|
[
"region:us"
] |
2023-10-10T02:18:11+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 716480, "num_examples": 996}], "download_size": 189768, "dataset_size": 716480}}
|
2023-10-10T04:03:48+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "psychology-dataset-v2"
More Information needed
|
[
"# Dataset Card for \"psychology-dataset-v2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"psychology-dataset-v2\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"psychology-dataset-v2\"\n\nMore Information needed"
] |
f107022db1f326a13336d358939d5f027deacd72
|
数据集包含了一本族谱的封面和164页内容,是竖版的中文简体和繁体字的组合。
The dataset contains the cover and 164 pages of a family tree, which is a combination of simplified and traditional Chinese characters in a vertical version.
|
mmhzlrj/Genealogy
|
[
"language:zh",
"license:apache-2.0",
"region:us"
] |
2023-10-10T02:20:08+00:00
|
{"language": ["zh"], "license": "apache-2.0"}
|
2023-10-10T02:34:14+00:00
|
[] |
[
"zh"
] |
TAGS
#language-Chinese #license-apache-2.0 #region-us
|
数据集包含了一本族谱的封面和164页内容,是竖版的中文简体和繁体字的组合。
The dataset contains the cover and 164 pages of a family tree, which is a combination of simplified and traditional Chinese characters in a vertical version.
|
[] |
[
"TAGS\n#language-Chinese #license-apache-2.0 #region-us \n"
] |
[
19
] |
[
"passage: TAGS\n#language-Chinese #license-apache-2.0 #region-us \n"
] |
4eb085d2b45e5e4cdf60557c6e5fadee2308cfbe
|
# AutoTrain Dataset for project: resnet50_test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project resnet50_test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<1920x1920 RGB PIL image>",
"target": 2
},
{
"image": "<1080x721 RGB PIL image>",
"target": 2
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['000', '005', '033'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 2244 |
| valid | 564 |
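As a hedged illustration (assuming this repository loads directly with the `datasets` library), a sample can be inspected like this:
```python
from datasets import load_dataset

# Load the train split of the AutoTrain-processed dataset
ds = load_dataset("SRGui/autotrain-data-resnet50_test", split="train")

sample = ds[0]
# sample["image"] is a decoded PIL image; sample["target"] indexes
# the class names ['000', '005', '033']
print(sample["target"], ds.features["target"].names[sample["target"]])
```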
|
SRGui/autotrain-data-resnet50_test
|
[
"task_categories:image-classification",
"region:us"
] |
2023-10-10T02:27:16+00:00
|
{"task_categories": ["image-classification"]}
|
2023-10-10T04:31:27+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #region-us
|
AutoTrain Dataset for project: resnet50\_test
=============================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project resnet50\_test.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and a validation split. The split sizes are as follows:
|
[
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
17,
27,
17,
23,
27
] |
[
"passage: TAGS\n#task_categories-image-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
4ede5cd4a659ae02bed32ecfe70340d90378263d
|
* Update 2023.10.22: A higher-quality [Chinese long-text QA dataset](https://huggingface.co/datasets/yuyijiong/Long-Instruction-Chinese) that is not Google-translated has been released; some of its subsets are still small and are being expanded.
* Update 2023.10.18: Removed some duplicate and low-quality data; improved the answer and instruction formats.
# Chinese Long-Text Instruction-Tuning Datasets: A Compilation
Because Chinese data is currently scarce, most of the data was translated from English datasets with Google Translate; the translation quality leaves room for improvement but is currently serviceable. More data may be added in the future. \
Most of the data has been filtered so that its length (character count) exceeds 8,000, to meet the needs of long-text fine-tuning. \
The instruction-tuning data has already been converted to llama's chat format (see the sketch below): ```"<s>Human: " + question + "\n</s>" + "<s>Assistant: " + answer + "\n</s>"```
Because Chinese text is generally shorter than English, many texts become noticeably shorter once translated from English into Chinese.
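A minimal sketch of that template in Python (the helper name and placeholder strings are illustrative, not from the dataset):
```python
# Illustrative helper for the llama chat format shown above
def to_llama_chat(question: str, answer: str) -> str:
    return "<s>Human: " + question + "\n</s>" + "<s>Assistant: " + answer + "\n</s>"

print(to_llama_chat("<question>", "<answer>"))
```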
Data composition:
## 1. LongAlpaca dataset
Source: [Yukang/LongAlpaca-12k](https://huggingface.co/datasets/Yukang/LongAlpaca-12k) \
The original dataset has been split into five parts: book_sum, paper_qa, paper_review, paper_compare, and paper_conclusion, totaling about 9k long-text instruction-tuning examples. \
The original dataset also contains 3k short-text instruction-tuning examples extracted from the alpaca dataset; this project does not include that alpaca portion.
## 2. Multi-document QA dataset
Source: [togethercomputer/Long-Data-Collections](https://huggingface.co/datasets/togethercomputer/Long-Data-Collections) \
Out of more than 80,000 English multi-document QA examples, the 4,000 with relatively good translation quality were selected. Each instruction provides dozens or even hundreds of reference documents, only one of which contains useful information; the answer must not only answer the question but also cite the index of the relevant reference document.
## 3. ShareGPT Chinese long-dialogue dataset
Source: [shibing624/sharegpt_gpt4](https://huggingface.co/datasets/shibing624/sharegpt_gpt4/tree/main) \
Multi-turn dialogue data from gpt3.5 and gpt4, Chinese only, with text length greater than 4,000.
|
yuyijiong/Long-instruction-en2zh
|
[
"task_categories:text-generation",
"task_categories:conversational",
"task_categories:summarization",
"size_categories:10K<n<100K",
"language:zh",
"license:cc-by-nc-4.0",
"region:us"
] |
2023-10-10T02:31:48+00:00
|
{"language": ["zh"], "license": "cc-by-nc-4.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "conversational", "summarization"]}
|
2023-11-20T02:47:34+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #task_categories-conversational #task_categories-summarization #size_categories-10K<n<100K #language-Chinese #license-cc-by-nc-4.0 #region-us
|
* Update 2023.10.22: A higher-quality Chinese long-text QA dataset that is not Google-translated has been released; some of its subsets are still small and are being expanded.
* Update 2023.10.18: Removed some duplicate and low-quality data; improved the answer and instruction formats.
# Chinese Long-Text Instruction-Tuning Datasets: A Compilation
Because Chinese data is currently scarce, most of the data was translated from English datasets with Google Translate; the translation quality leaves room for improvement but is currently serviceable. More data may be added in the future. \
Most of the data has been filtered so that its length (character count) exceeds 8,000, to meet the needs of long-text fine-tuning. \
The instruction-tuning data has already been converted to llama's chat format:
Because Chinese text is generally shorter than English, many texts become noticeably shorter once translated from English into Chinese.
Data composition:
## 1. LongAlpaca dataset
Source: Yukang/LongAlpaca-12k \
The original dataset has been split into five parts: book_sum, paper_qa, paper_review, paper_compare, and paper_conclusion, totaling about 9k long-text instruction-tuning examples. \
The original dataset also contains 3k short-text instruction-tuning examples extracted from the alpaca dataset; this project does not include that alpaca portion.
## 2. Multi-document QA dataset
Source: togethercomputer/Long-Data-Collections \
Out of more than 80,000 English multi-document QA examples, the 4,000 with relatively good translation quality were selected. Each instruction provides dozens or even hundreds of reference documents, only one of which contains useful information; the answer must not only answer the question but also cite the index of the relevant reference document.
## 3. ShareGPT Chinese long-dialogue dataset
Source: shibing624/sharegpt_gpt4 \
Multi-turn dialogue data from gpt3.5 and gpt4, Chinese only, with text length greater than 4,000.
|
[
"# 中文长文本指令微调数据集-汇编\n由于目前中文数据不足,大部分数据都是从英文数据集通过谷歌翻译过来的,翻译质量略有待提升,目前勉强能用。未来可能还会增加数据。 \\\n大部分数据经过筛选,长度(字符数)大于8000,以满足长文本微调的需要 \\\n指令微调数据都已经转化为llama的chat格式 : \n因为中文长度普遍短于英文,很多英文翻译为中文后,文本长度显著缩短。\n\n数据组成:",
"## 1. LongAlpaca数据集\n数据来源:Yukang/LongAlpaca-12k \\\n原数据集已经被拆分为 book_sum、paper_qa、paper_review、paper_compare、paper_conclusion 5部分,一共约9k的长文本指令微调数据 \\\n原数据集中还包含3k的抽取自alpaca数据集的短文本指令微调,此项目中不包含这部分alpaca数据集。",
"## 2. 多文档问答数据集\n数据来源: togethercomputer/Long-Data-Collections \\\n从8万多个英文的多文档问答数据中,选出翻译质量较好的4000条。指令中给出几十或上百个参考文档,只有一个文档含有有用信息,回答中不仅要回答问题,还需要指出参考文档的序号。",
"## 3. SharGPT中文长对话数据集\n数据来源:shibing624/sharegpt_gpt4 \\\n来自于gpt3.5和gpt4的多轮对话数据,仅包含中文,文本长度大于4000"
] |
[
"TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-summarization #size_categories-10K<n<100K #language-Chinese #license-cc-by-nc-4.0 #region-us \n",
"# 中文长文本指令微调数据集-汇编\n由于目前中文数据不足,大部分数据都是从英文数据集通过谷歌翻译过来的,翻译质量略有待提升,目前勉强能用。未来可能还会增加数据。 \\\n大部分数据经过筛选,长度(字符数)大于8000,以满足长文本微调的需要 \\\n指令微调数据都已经转化为llama的chat格式 : \n因为中文长度普遍短于英文,很多英文翻译为中文后,文本长度显著缩短。\n\n数据组成:",
"## 1. LongAlpaca数据集\n数据来源:Yukang/LongAlpaca-12k \\\n原数据集已经被拆分为 book_sum、paper_qa、paper_review、paper_compare、paper_conclusion 5部分,一共约9k的长文本指令微调数据 \\\n原数据集中还包含3k的抽取自alpaca数据集的短文本指令微调,此项目中不包含这部分alpaca数据集。",
"## 2. 多文档问答数据集\n数据来源: togethercomputer/Long-Data-Collections \\\n从8万多个英文的多文档问答数据中,选出翻译质量较好的4000条。指令中给出几十或上百个参考文档,只有一个文档含有有用信息,回答中不仅要回答问题,还需要指出参考文档的序号。",
"## 3. SharGPT中文长对话数据集\n数据来源:shibing624/sharegpt_gpt4 \\\n来自于gpt3.5和gpt4的多轮对话数据,仅包含中文,文本长度大于4000"
] |
[
65,
125,
105,
87,
52
] |
[
"passage: TAGS\n#task_categories-text-generation #task_categories-conversational #task_categories-summarization #size_categories-10K<n<100K #language-Chinese #license-cc-by-nc-4.0 #region-us \n# 中文长文本指令微调数据集-汇编\n由于目前中文数据不足,大部分数据都是从英文数据集通过谷歌翻译过来的,翻译质量略有待提升,目前勉强能用。未来可能还会增加数据。 \\\n大部分数据经过筛选,长度(字符数)大于8000,以满足长文本微调的需要 \\\n指令微调数据都已经转化为llama的chat格式 : \n因为中文长度普遍短于英文,很多英文翻译为中文后,文本长度显著缩短。\n\n数据组成:## 1. LongAlpaca数据集\n数据来源:Yukang/LongAlpaca-12k \\\n原数据集已经被拆分为 book_sum、paper_qa、paper_review、paper_compare、paper_conclusion 5部分,一共约9k的长文本指令微调数据 \\\n原数据集中还包含3k的抽取自alpaca数据集的短文本指令微调,此项目中不包含这部分alpaca数据集。## 2. 多文档问答数据集\n数据来源: togethercomputer/Long-Data-Collections \\\n从8万多个英文的多文档问答数据中,选出翻译质量较好的4000条。指令中给出几十或上百个参考文档,只有一个文档含有有用信息,回答中不仅要回答问题,还需要指出参考文档的序号。## 3. SharGPT中文长对话数据集\n数据来源:shibing624/sharegpt_gpt4 \\\n来自于gpt3.5和gpt4的多轮对话数据,仅包含中文,文本长度大于4000"
] |
0e49271075554085c0092231ac41e852964d61d3
|
# Dataset Card for "reconstruction"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
englert-m/reconstruction
|
[
"region:us"
] |
2023-10-10T02:37:34+00:00
|
{"dataset_info": {"features": [{"name": "orig", "dtype": "int32"}, {"name": "corrupted", "dtype": "image"}, {"name": "count", "dtype": "int32"}, {"name": "xflip", "dtype": "int64"}, {"name": "yflip", "dtype": "int64"}, {"name": "scale", "dtype": "float32"}, {"name": "rotate_frac", "dtype": "float32"}, {"name": "aniso_w", "dtype": "float32"}, {"name": "aniso_r", "dtype": "float32"}, {"name": "translate_frac", "sequence": "float32"}], "splits": [{"name": "train", "num_bytes": 147813503004.625, "num_examples": 59583403}], "download_size": 155980537726, "dataset_size": 147813503004.625}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2024-02-12T18:48:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "reconstruction"
More Information needed
|
[
"# Dataset Card for \"reconstruction\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"reconstruction\"\n\nMore Information needed"
] |
[
6,
12
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"reconstruction\"\n\nMore Information needed"
] |
f6782f89259b3451d8a82e1b3a027804204a6dac
|
# Dataset Card for "wikinews-zhtw"
Wikinews is an online news outlet run by volunteers acting as citizen journalists. It is a free-content wiki and one of the Wikimedia projects, operated by the Wikimedia Foundation. Wikinews follows a collaborative-journalism model and strives to report the news from a neutral point of view, including original first-hand reporting and interviews.
This dataset is built from the Chinese `zhwikinews` download files in the Wikipedia dumps (https://dumps.wikimedia.org/). Each example contains the full content of one Wikinews article, cleaned to remove unwanted parts.
- **Homepage:** [https://dumps.wikimedia.org](https://dumps.wikimedia.org)
- **zhwikinews download:** [https://dumps.wikimedia.org/zhwikinews](https://dumps.wikimedia.org/zhwikinews/)
## Data Dump Versions
Because Wikipedia periodically publishes dumps of its site data, the following dumps were available for download as of `2023/10/10`:
|Dump directory|Dump timestamp|
|-------------|--------|
|`20230520/`|01-Jul-2023 09:28|
|`20230601/`|20-Jul-2023 09:41|
|`20230620/`|01-Aug-2023 09:35|
|`20230701/`|20-Aug-2023 09:49|
|`20230720/`|01-Sep-2023 09:35|
|`20230801/`|20-Sep-2023 09:46|
|`20230820/`|01-Oct-2023 09:42|
|`20230901/`|02-Sep-2023 14:47|
|`20230920/`|21-Sep-2023 14:41|
|`20231001/`|10-Oct-2023 03:50|
|`latest/`|10-Oct-2023 03:50|
This dataset is periodically rebuilt from the most recent clearly dated dump, which is downloaded and cleaned so that it is easy to verify and use.
## Download and Cleaning
1. Download the zhwikinews data dump files
2. Extract the article content with the [WikiExtractor](https://github.com/attardi/wikiextractor) package
3. Clean the data and convert it into jsonl files
4. Load the jsonl files with the Hugging Face [Datasets](https://pypi.org/project/datasets/) package and upload them to the Hugging Face Hub
## Dataset Structure
An example looks as follows:
{'id': '35',
'url': 'https://zh.wikinews.org/wiki?curid=35',
'title': 'EDWIN與CUELLO遭統一獅隊解約',
'text': '曾經打過中國棒球聯賽的兩位外援球員EDWIN(臺譯:愛力)與CUELLO(臺譯:阿-{A|裡}-),昨天傳出...'
}
## Data Fields
The data fields are the same across all configurations:
- `id (str)`: ID of the article.
- `url (str)`: URL of the article.
- `title (str)`: Title of the article.
- `text (str)`: Text content of the article.
## Usage
```python
from datasets import load_dataset
# specify the date of the data dump as the second argument
load_dataset("erhwenkuo/wikinews-zhtw", "20231001")
```
## Licensing Information
Most of Wikipedia's article text and many of its images are co-licensed under the `Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA)` and the `GNU Free Documentation License (GFDL)`.
## Citation
```
@ONLINE{wikidump,
author = "Wikimedia Foundation",
title = "Wikimedia Downloads",
url = "https://dumps.wikimedia.org"
}
```
|
erhwenkuo/wikinews-zhtw
|
[
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:zh",
"license:cc-by-sa-3.0",
"region:us"
] |
2023-10-10T02:55:49+00:00
|
{"language": ["zh"], "license": "cc-by-sa-3.0", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"], "dataset_info": {"config_name": "20231001", "features": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13647957, "num_examples": 9827}], "download_size": 8803739, "dataset_size": 13647957}, "configs": [{"config_name": "20231001", "data_files": [{"split": "train", "path": "20231001/train-*"}]}]}
|
2023-10-10T03:06:53+00:00
|
[] |
[
"zh"
] |
TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-cc-by-sa-3.0 #region-us
|
Dataset Card for "wikinews-zhtw"
================================
Wikinews is an online news outlet run by volunteers acting as citizen journalists. It is a free-content wiki and one of the Wikimedia projects, operated by the Wikimedia Foundation. Wikinews follows a collaborative-journalism model and strives to report the news from a neutral point of view, including original first-hand reporting and interviews.
This dataset is built from the Chinese 'zhwikinews' download files in the Wikipedia dumps (URL). Each example contains the full content of one Wikinews article, cleaned to remove unwanted parts.
* Homepage: URL
* zhwikinews download: URL
Data Dump Versions
------------------
Because Wikipedia periodically publishes dumps of its site data, the following dumps were available for download as of '2023/10/10':
This dataset is periodically rebuilt from the most recent clearly dated dump, which is downloaded and cleaned so that it is easy to verify and use.
Download and Cleaning
---------------------
1. Download the zhwikinews data dump files
2. Extract the article content with the WikiExtractor package
3. Clean the data and convert it into jsonl files
4. Load the jsonl files with the Hugging Face Datasets package and upload them to the Hugging Face Hub
Dataset Structure
-----------------
An example looks as follows:
{'id': '35',
'url': 'URL
'title': 'EDWIN與CUELLO遭統一獅隊解約',
'text': '曾經打過中國棒球聯賽的兩位外援球員EDWIN(臺譯:愛力)與CUELLO(臺譯:阿-{A|裡}-),昨天傳出...'
}
Data Fields
-----------
The data fields are the same across all configurations:
* 'id (str)': ID of the article.
* 'url (str)': URL of the article.
* 'title (str)': Title of the article.
* 'text (str)': Text content of the article.
Usage
-----
Licensing Information
---------------------
Most of Wikipedia's article text and many of its images are co-licensed under the 'Creative Commons Attribution-ShareAlike 3.0 Unported License (CC BY-SA)' and the 'GNU Free Documentation License (GFDL)'.
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-cc-by-sa-3.0 #region-us \n"
] |
[
45
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-Chinese #license-cc-by-sa-3.0 #region-us \n"
] |
b28f874a68e2eebed72d06c5c2f30555a53ae211
|
# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-7b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [PocketDoc/Dans-AdventurousWinds-7b](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-7b",
"harness_winogrande_5",
split="train")
```
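Similarly, the aggregated results described above can be loaded from the "results" configuration (a usage sketch based on the config and split names listed in this card, not part of the original card text):
```python
from datasets import load_dataset

# The "results" config aggregates all runs; the "latest" split points to the newest
results = load_dataset(
    "open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-7b",
    "results",
    split="latest",
)
```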
## Latest results
These are the [latest results from run 2023-10-24T16:13:28.760766](https://huggingface.co/datasets/open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-7b/blob/main/results_2023-10-24T16-13-28.760766.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.32791526845637586,
"em_stderr": 0.004807646038011011,
"f1": 0.3764691694630872,
"f1_stderr": 0.004686966609320671,
"acc": 0.46954983116649207,
"acc_stderr": 0.010810156337777745
},
"harness|drop|3": {
"em": 0.32791526845637586,
"em_stderr": 0.004807646038011011,
"f1": 0.3764691694630872,
"f1_stderr": 0.004686966609320671
},
"harness|gsm8k|5": {
"acc": 0.15693707354056102,
"acc_stderr": 0.010019246595616167
},
"harness|winogrande|5": {
"acc": 0.7821625887924231,
"acc_stderr": 0.011601066079939324
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-7b
|
[
"region:us"
] |
2023-10-10T03:05:21+00:00
|
{"pretty_name": "Evaluation run of PocketDoc/Dans-AdventurousWinds-7b", "dataset_summary": "Dataset automatically created during the evaluation run of model [PocketDoc/Dans-AdventurousWinds-7b](https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-7b\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T16:13:28.760766](https://huggingface.co/datasets/open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-7b/blob/main/results_2023-10-24T16-13-28.760766.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.32791526845637586,\n \"em_stderr\": 0.004807646038011011,\n \"f1\": 0.3764691694630872,\n \"f1_stderr\": 0.004686966609320671,\n \"acc\": 0.46954983116649207,\n \"acc_stderr\": 0.010810156337777745\n },\n \"harness|drop|3\": {\n \"em\": 0.32791526845637586,\n \"em_stderr\": 0.004807646038011011,\n \"f1\": 0.3764691694630872,\n \"f1_stderr\": 0.004686966609320671\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.15693707354056102,\n \"acc_stderr\": 0.010019246595616167\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7821625887924231,\n \"acc_stderr\": 0.011601066079939324\n }\n}\n```", "repo_url": "https://huggingface.co/PocketDoc/Dans-AdventurousWinds-7b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|arc:challenge|25_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T16_13_28.760766", "path": ["**/details_harness|drop|3_2023-10-24T16-13-28.760766.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T16-13-28.760766.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T16_13_28.760766", "path": ["**/details_harness|gsm8k|5_2023-10-24T16-13-28.760766.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T16-13-28.760766.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hellaswag|10_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-04-57.551374.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-04-57.551374.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T04-04-57.551374.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T04-04-57.551374.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T04-04-57.551374.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T16_13_28.760766", "path": ["**/details_harness|winogrande|5_2023-10-24T16-13-28.760766.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T16-13-28.760766.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T04_04_57.551374", "path": ["results_2023-10-10T04-04-57.551374.parquet"]}, {"split": "2023_10_24T16_13_28.760766", "path": ["results_2023-10-24T16-13-28.760766.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T16-13-28.760766.parquet"]}]}]}
|
2023-10-24T15:13:42+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-7b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-7b on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
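The accompanying snippet did not survive this rendering; below is a minimal sketch following the loading pattern of the Synatra-V0.1-7B card later in this document. The repository id is inferred from the model name and is an assumption:
```python
from datasets import load_dataset

# Repository id inferred from the model name (assumption); the config name
# "harness_winogrande_5" appears in this record's metadata.
data = load_dataset("open-llm-leaderboard/details_PocketDoc__Dans-AdventurousWinds-7b",
                    "harness_winogrande_5",
                    split="train")
```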
## Latest results
These are the latest results from run 2023-10-24T16:13:28.760766 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T16:13:28.760766(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-7b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T16:13:28.760766(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of PocketDoc/Dans-AdventurousWinds-7b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model PocketDoc/Dans-AdventurousWinds-7b on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T16:13:28.760766(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d0bfcbf6b52a4bed932242f1063c748e1aad7210
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
text: (str) - The generated text according to the retrieval criterion and the style-2 prompt found in [github:SWE-bench](https://github.com/princeton-nlp/SWE-bench).
input_ids: (List[int]) - The cl100k_base tokens for each text.
```
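As a minimal sketch, loading this dataset and inspecting one datum (the `test` split name and the 2,294-example count come from this record's metadata):
```python
from datasets import load_dataset

# One "test" split with 2,294 instances, per the metadata for this record.
swebench = load_dataset("princeton-nlp/SWE-bench_bm25_13k_cl100k", split="test")

datum = swebench[0]
print(datum["instance_id"])               # repo_owner__repo_name-PR-number
print(datum["problem_statement"][:200])   # issue title and body
print(len(datum["input_ids"]))            # number of cl100k_base tokens in `text`
```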
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
princeton-nlp/SWE-bench_bm25_13k_cl100k
|
[
"region:us"
] |
2023-10-10T03:09:27+00:00
|
{"dataset_info": {"features": [{"name": "base_commit", "dtype": "string"}, {"name": "hints_text", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "test_patch", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "problem_statement", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "instance_id", "dtype": "string"}, {"name": "FAIL_TO_PASS", "dtype": "string"}, {"name": "PASS_TO_PASS", "dtype": "string"}, {"name": "environment_setup_commit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "patch", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 276234065, "num_examples": 2294}], "download_size": 113943225, "dataset_size": 276234065}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-10T18:31:31+00:00
|
[] |
[] |
TAGS
#region-us
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
More Information needed
|
[
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:
\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:
\n\nMore Information needed"
] |
[
6,
68,
43,
30,
6,
24
] |
[
"passage: TAGS\n#region-us \n### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.## Dataset Structure### Data Instances\nAn example of a SWE-bench datum is as follows:
\n\nMore Information needed"
] |
cc584b302f6f3e380f287346f03fe9f12ac1c59b
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
text: (str) - The generated text according to the retrieval criterion and the style-2 prompt found in [github:SWE-bench](https://github.com/princeton-nlp/SWE-bench).
input_ids: (List[int]) - The cl100k_base tokens for each text.
```
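Because `FAIL_TO_PASS` and `PASS_TO_PASS` are JSON lists serialized as strings, they need an explicit decode; a small sketch:
```python
import json

from datasets import load_dataset

swebench = load_dataset("princeton-nlp/SWE-bench_bm25_27k_cl100k", split="test")
datum = swebench[0]

# Both fields are JSON-encoded lists of test identifiers, per the listing above.
fail_to_pass = json.loads(datum["FAIL_TO_PASS"])  # tests the gold patch must fix
pass_to_pass = json.loads(datum["PASS_TO_PASS"])  # tests that must keep passing
print(len(fail_to_pass), len(pass_to_pass))
```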
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
princeton-nlp/SWE-bench_bm25_27k_cl100k
|
[
"region:us"
] |
2023-10-10T03:09:43+00:00
|
{"dataset_info": {"features": [{"name": "base_commit", "dtype": "string"}, {"name": "hints_text", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "test_patch", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "problem_statement", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "instance_id", "dtype": "string"}, {"name": "FAIL_TO_PASS", "dtype": "string"}, {"name": "PASS_TO_PASS", "dtype": "string"}, {"name": "environment_setup_commit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "patch", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 541825176, "num_examples": 2294}], "download_size": 235069451, "dataset_size": 541825176}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-10T18:32:10+00:00
|
[] |
[] |
TAGS
#region-us
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
More Information needed
|
[
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
[
6,
68,
43,
30,
6,
24
] |
[
"passage: TAGS\n#region-us \n### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.## Dataset Structure### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
5a7572b75cee5e58c6e6cb33d792b53725bb64cc
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
text: (str) - The generated text according to the retrieval criterion and the style-2 prompt found in [github:SWE-bench](https://github.com/princeton-nlp/SWE-bench).
input_ids: (List[int]) - The llama tokens for each text.
```
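Since `input_ids` here are LLaMA tokens, they can be decoded back to text with a LLaMA tokenizer; a sketch, where the exact checkpoint is not named in this card, so `huggyllama/llama-7b` is an assumption:
```python
from datasets import load_dataset
from transformers import AutoTokenizer

swebench = load_dataset("princeton-nlp/SWE-bench_bm25_50k_llama", split="test")
datum = swebench[0]

# Assumption: a standard LLaMA tokenizer; the card does not name the checkpoint.
tokenizer = AutoTokenizer.from_pretrained("huggyllama/llama-7b")
decoded = tokenizer.decode(datum["input_ids"], skip_special_tokens=True)
print(decoded[:200])  # should correspond to the start of datum["text"]
```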
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
princeton-nlp/SWE-bench_bm25_50k_llama
|
[
"region:us"
] |
2023-10-10T03:10:21+00:00
|
{"dataset_info": {"features": [{"name": "base_commit", "dtype": "string"}, {"name": "hints_text", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "test_patch", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "problem_statement", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "instance_id", "dtype": "string"}, {"name": "FAIL_TO_PASS", "dtype": "string"}, {"name": "PASS_TO_PASS", "dtype": "string"}, {"name": "environment_setup_commit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "patch", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 827697217, "num_examples": 2294}], "download_size": 319302210, "dataset_size": 827697217}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-10T18:33:17+00:00
|
[] |
[] |
TAGS
#region-us
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
More Information needed
|
[
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:
\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:
\nMore Information needed"
] |
[
6,
68,
43,
30,
6,
24
] |
[
"passage: TAGS\n#region-us \n### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.## Dataset Structure### Data Instances\nAn example of a SWE-bench datum is as follows:
\nMore Information needed"
] |
790562ead0782876a1e1583fbdde20f1bca5a574
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
text: (str) - The generated text according to the retrieval criterion and the style-2 prompt found in [github:SWE-bench](https://github.com/princeton-nlp/SWE-bench).
input_ids: (List[int]) - The llama tokens for each text.
```
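The `base_commit` and `patch` fields are enough to reproduce the gold edit locally; a sketch assuming the repository named in `repo` has already been cloned to `repo_dir` (a hypothetical local path):
```python
import subprocess

from datasets import load_dataset

swebench = load_dataset("princeton-nlp/SWE-bench_oracle_llama", split="test")
datum = swebench[0]

repo_dir = "./" + datum["repo"].replace("/", "__")  # assumed clone location

# Check out the pre-PR state, then apply the gold patch on top of it.
subprocess.run(["git", "checkout", datum["base_commit"]], cwd=repo_dir, check=True)
subprocess.run(["git", "apply", "-"], input=datum["patch"].encode(),
               cwd=repo_dir, check=True)
```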
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
princeton-nlp/SWE-bench_oracle_llama
|
[
"region:us"
] |
2023-10-10T03:10:48+00:00
|
{"dataset_info": {"features": [{"name": "base_commit", "dtype": "string"}, {"name": "hints_text", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "test_patch", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "problem_statement", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "instance_id", "dtype": "string"}, {"name": "FAIL_TO_PASS", "dtype": "string"}, {"name": "PASS_TO_PASS", "dtype": "string"}, {"name": "environment_setup_commit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "patch", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 550241818, "num_examples": 2294}], "download_size": 199730762, "dataset_size": 550241818}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-17T12:59:03+00:00
|
[] |
[] |
TAGS
#region-us
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
More Information needed
|
[
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\nMore Information needed"
] |
[
6,
68,
43,
30,
6,
24
] |
[
"passage: TAGS\n#region-us \n### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.## Dataset Structure### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\nMore Information needed"
] |
878195b4213f7de5ddd50acbd40e0d4710259a19
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation of the solution PR’s first commit creation date.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
text: (str) - The generated text according to the retrieval criterion and the style-2 prompt found in [github:SWE-bench](https://github.com/princeton-nlp/SWE-bench).
input_ids: (List[int]) - The llama tokens for each text.
```
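For the cl100k variant, the token ids can be decoded with `tiktoken`; a minimal sketch:
```python
import tiktoken

from datasets import load_dataset

swebench = load_dataset("princeton-nlp/SWE-bench_oracle_cl100k", split="test")
datum = swebench[0]

# "cl100k_base" is the encoding named in the field listing above.
encoding = tiktoken.get_encoding("cl100k_base")
print(encoding.decode(datum["input_ids"])[:200])  # start of datum["text"]
```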
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
princeton-nlp/SWE-bench_oracle_cl100k
|
[
"region:us"
] |
2023-10-10T03:11:10+00:00
|
{"dataset_info": {"features": [{"name": "base_commit", "dtype": "string"}, {"name": "hints_text", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "test_patch", "dtype": "string"}, {"name": "repo", "dtype": "string"}, {"name": "problem_statement", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "instance_id", "dtype": "string"}, {"name": "FAIL_TO_PASS", "dtype": "string"}, {"name": "PASS_TO_PASS", "dtype": "string"}, {"name": "environment_setup_commit", "dtype": "string"}, {"name": "text", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "labels", "sequence": "int64"}, {"name": "patch", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 461652030, "num_examples": 2294}], "download_size": 189736456, "dataset_size": 461652030}, "configs": [{"config_name": "default", "data_files": [{"split": "test", "path": "data/test-*"}]}]}
|
2023-10-17T12:59:27+00:00
|
[] |
[] |
TAGS
#region-us
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
More Information needed
|
[
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
[
6,
68,
43,
30,
6,
24
] |
[
"passage: TAGS\n#region-us \n### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.## Dataset Structure### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
66e2839d219d39b4f3d6e39b56de4600f5f5c3f5
|
# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/maywell/Synatra-V0.1-7B
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [maywell/Synatra-V0.1-7B](https://huggingface.co/maywell/Synatra-V0.1-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__Synatra-V0.1-7B",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-23T11:25:13.204412](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-V0.1-7B/blob/main/results_2023-10-23T11-25-13.204412.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.32246224832214765,
"em_stderr": 0.004786806140711669,
"f1": 0.3963055788590608,
"f1_stderr": 0.004634063813539812,
"acc": 0.46089483255174657,
"acc_stderr": 0.011702308149823175
},
"harness|drop|3": {
"em": 0.32246224832214765,
"em_stderr": 0.004786806140711669,
"f1": 0.3963055788590608,
"f1_stderr": 0.004634063813539812
},
"harness|gsm8k|5": {
"acc": 0.19408642911296436,
"acc_stderr": 0.010893918308192417
},
"harness|winogrande|5": {
"acc": 0.7277032359905288,
"acc_stderr": 0.012510697991453932
}
}
```
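The same aggregated numbers can be pulled programmatically through the "results" configuration; a sketch, with the "latest" split name following the configs convention used by these detail repositories:
```python
from datasets import load_dataset

# The "results" configuration stores the aggregated metrics shown above.
results = load_dataset("open-llm-leaderboard/details_maywell__Synatra-V0.1-7B",
                       "results",
                       split="latest")
print(results[0])
```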
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_maywell__Synatra-V0.1-7B
|
[
"region:us"
] |
2023-10-10T03:31:22+00:00
|
{"pretty_name": "Evaluation run of maywell/Synatra-V0.1-7B", "dataset_summary": "Dataset automatically created during the evaluation run of model [maywell/Synatra-V0.1-7B](https://huggingface.co/maywell/Synatra-V0.1-7B) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maywell__Synatra-V0.1-7B\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-23T11:25:13.204412](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-V0.1-7B/blob/main/results_2023-10-23T11-25-13.204412.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.32246224832214765,\n \"em_stderr\": 0.004786806140711669,\n \"f1\": 0.3963055788590608,\n \"f1_stderr\": 0.004634063813539812,\n \"acc\": 0.46089483255174657,\n \"acc_stderr\": 0.011702308149823175\n },\n \"harness|drop|3\": {\n \"em\": 0.32246224832214765,\n \"em_stderr\": 0.004786806140711669,\n \"f1\": 0.3963055788590608,\n \"f1_stderr\": 0.004634063813539812\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.19408642911296436,\n \"acc_stderr\": 0.010893918308192417\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7277032359905288,\n \"acc_stderr\": 0.012510697991453932\n }\n}\n```", "repo_url": "https://huggingface.co/maywell/Synatra-V0.1-7B", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|arc:challenge|25_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_23T11_25_13.204412", "path": ["**/details_harness|drop|3_2023-10-23T11-25-13.204412.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-23T11-25-13.204412.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_23T11_25_13.204412", "path": ["**/details_harness|gsm8k|5_2023-10-23T11-25-13.204412.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-23T11-25-13.204412.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hellaswag|10_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": 
"2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-30-58.971713.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-30-58.971713.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T04-30-58.971713.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T04-30-58.971713.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T04-30-58.971713.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_23T11_25_13.204412", "path": ["**/details_harness|winogrande|5_2023-10-23T11-25-13.204412.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-23T11-25-13.204412.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T04_30_58.971713", "path": ["results_2023-10-10T04-30-58.971713.parquet"]}, {"split": "2023_10_23T11_25_13.204412", "path": ["results_2023-10-23T11-25-13.204412.parquet"]}, {"split": "latest", "path": ["results_2023-10-23T11-25-13.204412.parquet"]}]}]}
|
2023-10-23T10:25:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
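A minimal sketch, assuming the leaderboard's usual `details_<org>__<model>` repository naming (the repository name below is therefore an assumption); the configuration and split names are taken from the configs listed for this dataset:

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_maywell__Synatra-V0.1-7B",  # assumed repo name
    "harness_winogrande_5",  # one of the 64 task configurations
    split="latest",          # always points to the most recent run
)
```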
## Latest results
These are the latest results from run 2023-10-23T11:25:13.204412 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T11:25:13.204412(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-23T11:25:13.204412(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
20,
31,
168,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-23T11:25:13.204412(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
df462cb05ee5b7dece20c34a2721d431d46725f6
|
# Dataset Card for "finance-alpaca_vi"
+ Original dataset: [finance-alpaca](https://huggingface.co/datasets/gbharti/finance-alpaca)
+ Translated into Vietnamese using the OpenAI GPT-3.5 API (see the loading sketch below)
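A minimal usage sketch with the Hugging Face `datasets` library (field names taken from this card's metadata):

```python
from datasets import load_dataset

ds = load_dataset("infCapital/finance-alpaca_vi", split="train")
print(ds[0]["instruction"])
print(ds[0]["output"])
```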
|
infCapital/finance-alpaca_vi
|
[
"task_categories:question-answering",
"task_categories:text-generation",
"language:vi",
"license:apache-2.0",
"region:us"
] |
2023-10-10T03:35:34+00:00
|
{"language": ["vi"], "license": "apache-2.0", "task_categories": ["question-answering", "text-generation"], "dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 48252402, "num_examples": 66665}], "download_size": 24622108, "dataset_size": 48252402}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T03:40:29+00:00
|
[] |
[
"vi"
] |
TAGS
#task_categories-question-answering #task_categories-text-generation #language-Vietnamese #license-apache-2.0 #region-us
|
# Dataset Card for "finance-alpaca_vi"
+ Original dataset: finance-alpaca
+ Translated into Vietnamese using the OpenAI GPT-3.5 API
|
[
"# Dataset Card for \"finance-alpaca_vi\"\n+ Origin dataset finance-alpaca\n+ Translated into Vietnamese using OpenAI GPT3.5 API"
] |
[
"TAGS\n#task_categories-question-answering #task_categories-text-generation #language-Vietnamese #license-apache-2.0 #region-us \n",
"# Dataset Card for \"finance-alpaca_vi\"\n+ Origin dataset finance-alpaca\n+ Translated into Vietnamese using OpenAI GPT3.5 API"
] |
[
44,
36
] |
[
"passage: TAGS\n#task_categories-question-answering #task_categories-text-generation #language-Vietnamese #license-apache-2.0 #region-us \n# Dataset Card for \"finance-alpaca_vi\"\n+ Origin dataset finance-alpaca\n+ Translated into Vietnamese using OpenAI GPT3.5 API"
] |
758049d9fac16ad628d0f7c45525a15932f6750c
|
# Dataset Card for "music_vid_256"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
keonroohparvar/music_vid_256
|
[
"region:us"
] |
2023-10-10T03:40:33+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 93921778.139, "num_examples": 9179}], "download_size": 80840102, "dataset_size": 93921778.139}}
|
2023-10-10T03:40:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "music_vid_256"
More Information needed
|
[
"# Dataset Card for \"music_vid_256\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"music_vid_256\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"music_vid_256\"\n\nMore Information needed"
] |
719074a408af99f041cd3bc0411f9b362d35081c
|
# Dataset Card for "radio-llama2-5pct"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Rewcifer/radio-llama2-5pct
|
[
"region:us"
] |
2023-10-10T03:43:27+00:00
|
{"dataset_info": {"features": [{"name": "output", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "instruction", "dtype": "string"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10787742, "num_examples": 1000}], "download_size": 2502601, "dataset_size": 10787742}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T03:43:29+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "radio-llama2-5pct"
More Information needed
|
[
"# Dataset Card for \"radio-llama2-5pct\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"radio-llama2-5pct\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"radio-llama2-5pct\"\n\nMore Information needed"
] |
bcc55e2385b1c7d2fc644f5622eddf7597e847ac
|
# Dataset Card for "childes_phones"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
michaelginn/childes_phones
|
[
"region:us"
] |
2023-10-10T03:49:38+00:00
|
{"dataset_info": {"features": [{"name": "line", "dtype": "string"}, {"name": "file", "dtype": "string"}, {"name": "ipa", "dtype": "string"}, {"name": "ipa_syll", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 2792238, "num_examples": 28466}], "download_size": 1400857, "dataset_size": 2792238}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-30T20:14:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "childes_phones"
More Information needed
|
[
"# Dataset Card for \"childes_phones\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"childes_phones\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"childes_phones\"\n\nMore Information needed"
] |
7a48a136817b1c5e1cdf46413546b74f79906995
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
The dataset was released as part of [SWE-bench: Can Language Models Resolve Real-World GitHub Issues?](https://arxiv.org/abs/2310.06770)
## Want to run inference now?
This dataset only contains the `problem_statement` (i.e. issue text) and the `base_commit`, which represents the state of the codebase before the issue has been resolved. If you want to run inference using the "Oracle" or BM25 retrieval settings mentioned in the paper, consider the following datasets.
[princeton-nlp/SWE-bench_oracle](https://huggingface.co/datasets/princeton-nlp/SWE-bench_oracle)
[princeton-nlp/SWE-bench_bm25_13K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_13K)
[princeton-nlp/SWE-bench_bm25_27K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_27K)
[princeton-nlp/SWE-bench_bm25_40K](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_40K)
[princeton-nlp/SWE-bench_bm25_50k_llama](https://huggingface.co/datasets/princeton-nlp/SWE-bench_bm25_50k_llama)
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at www.swebench.com
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
```
instance_id: (str) - A formatted instance identifier, usually as repo_owner__repo_name-PR-number.
patch: (str) - The gold patch, the patch generated by the PR (minus test-related code), that resolved the issue.
repo: (str) - The repository owner/name identifier from GitHub.
base_commit: (str) - The commit hash of the repository representing the HEAD of the repository before the solution PR is applied.
hints_text: (str) - Comments made on the issue prior to the creation date of the solution PR’s first commit.
created_at: (str) - The creation date of the pull request.
test_patch: (str) - A test-file patch that was contributed by the solution PR.
problem_statement: (str) - The issue title and body.
version: (str) - Installation version to use for running evaluation.
environment_setup_commit: (str) - commit hash to use for environment setup and installation.
FAIL_TO_PASS: (str) - A json list of strings that represent the set of tests resolved by the PR and tied to the issue resolution.
PASS_TO_PASS: (str) - A json list of strings that represent tests that should pass before and after the PR application.
```
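As a usage sketch, the dataset can be loaded with the Hugging Face `datasets` library; the split and field names below come from this card:

```python
import json

from datasets import load_dataset

# The "test" split holds the 2,294 evaluation task instances.
swe = load_dataset("princeton-nlp/SWE-bench", split="test")

example = swe[0]
print(example["instance_id"])              # e.g. repo_owner__repo_name-PR-number
print(example["problem_statement"][:300])  # issue title and body

# FAIL_TO_PASS / PASS_TO_PASS are JSON-encoded lists of test identifiers.
fail_to_pass = json.loads(example["FAIL_TO_PASS"])
print(len(fail_to_pass), "tests must flip from fail to pass")
```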
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
princeton-nlp/SWE-bench
|
[
"arxiv:2310.06770",
"region:us"
] |
2023-10-10T03:56:03+00:00
|
{"dataset_info": {"features": [{"name": "repo", "dtype": "string"}, {"name": "instance_id", "dtype": "string"}, {"name": "base_commit", "dtype": "string"}, {"name": "patch", "dtype": "string"}, {"name": "test_patch", "dtype": "string"}, {"name": "problem_statement", "dtype": "string"}, {"name": "hints_text", "dtype": "string"}, {"name": "created_at", "dtype": "string"}, {"name": "version", "dtype": "string"}, {"name": "FAIL_TO_PASS", "dtype": "string"}, {"name": "PASS_TO_PASS", "dtype": "string"}, {"name": "environment_setup_commit", "dtype": "string"}], "splits": [{"name": "dev", "num_bytes": 4783179, "num_examples": 225}, {"name": "test", "num_bytes": 41880503, "num_examples": 2294}, {"name": "train", "num_bytes": 367610377, "num_examples": 19008}], "download_size": 119829492, "dataset_size": 414274059}, "configs": [{"config_name": "default", "data_files": [{"split": "dev", "path": "data/dev-*"}, {"split": "test", "path": "data/test-*"}, {"split": "train", "path": "data/train-*"}]}]}
|
2023-11-17T19:02:27+00:00
|
[
"2310.06770"
] |
[] |
TAGS
#arxiv-2310.06770 #region-us
|
### Dataset Summary
SWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python repositories. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.
The dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?
## Want to run inference now?
This dataset only contains the 'problem_statement' (i.e. issue text) and the 'base_commit', which represents the state of the codebase before the issue has been resolved. If you want to run inference using the "Oracle" or BM25 retrieval settings mentioned in the paper, consider the following datasets.
princeton-nlp/SWE-bench_oracle
princeton-nlp/SWE-bench_bm25_13K
princeton-nlp/SWE-bench_bm25_27K
princeton-nlp/SWE-bench_bm25_40K
princeton-nlp/SWE-bench_bm25_50k_llama
### Supported Tasks and Leaderboards
SWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL
### Languages
The text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.
## Dataset Structure
### Data Instances
An example of a SWE-bench datum is as follows:
More Information needed
|
[
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.\n\nThe dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?",
"## Want to run inference now?\nThis dataset only contains the 'problem_statement' (i.e. issue text) and the 'base_commit' which can represents the state of the codebase before the issue has been resolved. If you want to run inference using the \"Oracle\" or BM25 retrieval settings mentioned in the paper, consider the following datasets.\n\nprinceton-nlp/SWE-bench_oracle\n\nprinceton-nlp/SWE-bench_bm25_13K\n\nprinceton-nlp/SWE-bench_bm25_27K\n\nprinceton-nlp/SWE-bench_bm25_40K\n\nprinceton-nlp/SWE-bench_bm25_50k_llama",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
[
"TAGS\n#arxiv-2310.06770 #region-us \n",
"### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.\n\nThe dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?",
"## Want to run inference now?\nThis dataset only contains the 'problem_statement' (i.e. issue text) and the 'base_commit' which can represents the state of the codebase before the issue has been resolved. If you want to run inference using the \"Oracle\" or BM25 retrieval settings mentioned in the paper, consider the following datasets.\n\nprinceton-nlp/SWE-bench_oracle\n\nprinceton-nlp/SWE-bench_bm25_13K\n\nprinceton-nlp/SWE-bench_bm25_27K\n\nprinceton-nlp/SWE-bench_bm25_40K\n\nprinceton-nlp/SWE-bench_bm25_50k_llama",
"### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL",
"### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.",
"## Dataset Structure",
"### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
[
15,
97,
178,
43,
30,
6,
24
] |
[
"passage: TAGS\n#arxiv-2310.06770 #region-us \n### Dataset Summary\nSWE-bench is a dataset that tests systems’ ability to solve GitHub issues automatically. The dataset collects 2,294 Issue-Pull Request pairs from 12 popular Python. Evaluation is performed by unit test verification using post-PR behavior as the reference solution.\n\nThe dataset was released as part of SWE-bench: Can Language Models Resolve Real-World GitHub Issues?## Want to run inference now?\nThis dataset only contains the 'problem_statement' (i.e. issue text) and the 'base_commit' which can represents the state of the codebase before the issue has been resolved. If you want to run inference using the \"Oracle\" or BM25 retrieval settings mentioned in the paper, consider the following datasets.\n\nprinceton-nlp/SWE-bench_oracle\n\nprinceton-nlp/SWE-bench_bm25_13K\n\nprinceton-nlp/SWE-bench_bm25_27K\n\nprinceton-nlp/SWE-bench_bm25_40K\n\nprinceton-nlp/SWE-bench_bm25_50k_llama### Supported Tasks and Leaderboards\nSWE-bench proposes a new task: issue resolution provided a full repository and GitHub issue. The leaderboard can be found at URL### Languages\n\nThe text of the dataset is primarily English, but we make no effort to filter or otherwise clean based on language type.## Dataset Structure### Data Instances\nAn example of a SWE-bench datum is as follows:\n\n\n\nMore Information needed"
] |
8b2b2bd4233140705f1f7bb48411b49cd188d89c
|
**Data Source**<br>
[Kaggle Medical Speech, Transcription, and Intent](https://www.kaggle.com/datasets/paultimothymooney/medical-speech-transcription-and-intent "Visit Original Dataset Page on Kaggle")<br>
**Context**<br>
>8.5 hours of audio utterances paired with text for common medical symptoms.<br>
**Content**<br>
>This data contains thousands of audio utterances for common medical symptoms like “knee pain” or “headache,” totaling more than 8 hours in aggregate. Each utterance was created by individual human contributors based on a given symptom. These audio snippets can be used to train conversational agents in the medical field.<br>
>
>This Figure Eight dataset was created via a multi-job workflow. The first involved contributors writing text phrases to describe symptoms given. For example, for “headache,” a contributor might write “I need help with my migraines.” Subsequent jobs captured audio utterances for accepted text strings.<br>
>
>Note that some of the labels are incorrect and some of the audio files have poor quality. I would recommend cleaning the dataset before training any machine learning models.<br>
>
>This dataset contains both the audio utterances and corresponding transcriptions.<br>
**What's new**<br>
* All columns have been dropped except for the file_path and the phrase<br>
* All audio is loaded into the DatasetDict as a 1D float32 array<br>
* All audio is resampled to 16 kHz<br>
* The new structure:

```
train = {
    'audio': {
        'path': file_path,  # the mp3 files are not included here; please visit Kaggle to download them
        'array': waveform_np,
        'sampling_rate': 16000
    },
    'sentence': the text transcription
}
```
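A minimal loading sketch; the nested shape of the `array` field follows the feature definition in this card's metadata (a sequence of float32 sequences), so it is flattened back to 1D here:

```python
import numpy as np
from datasets import load_dataset

ds = load_dataset("Hani89/medical_asr_recording_dataset", split="train")

sample = ds[0]
waveform = np.asarray(sample["audio"]["array"], dtype=np.float32).reshape(-1)
sr = sample["audio"]["sampling_rate"]  # 16000 after resampling

print(waveform.shape, sr)
print(sample["sentence"])              # the text transcription
```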
|
Hani89/medical_asr_recording_dataset
|
[
"task_categories:automatic-speech-recognition",
"size_categories:1K<n<10K",
"language:en",
"license:apache-2.0",
"medical",
"region:us"
] |
2023-10-10T04:13:04+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["automatic-speech-recognition"], "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "audio", "struct": [{"name": "array", "sequence": {"sequence": "float32"}}, {"name": "path", "dtype": "string"}, {"name": "sampling_rate", "dtype": "int64"}]}, {"name": "sentence", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 3128740048, "num_examples": 5328}, {"name": "test", "num_bytes": 776455056, "num_examples": 1333}], "download_size": 3882364624, "dataset_size": 3905195104}, "tags": ["medical"]}
|
2023-10-10T04:41:22+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-apache-2.0 #medical #region-us
|
Data Source<br>
Kaggle Medical Speech, Transcription, and Intent<br>
Context<br>
>8.5 hours of audio utterances paired with text for common medical symptoms.<br>
Content<br>
>This data contains thousands of audio utterances for common medical symptoms like “knee pain” or “headache,” totaling more than 8 hours in aggregate. Each utterance was created by individual human contributors based on a given symptom. These audio snippets can be used to train conversational agents in the medical field.<br>
>
>This Figure Eight dataset was created via a multi-job workflow. The first involved contributors writing text phrases to describe symptoms given. For example, for “headache,” a contributor might write “I need help with my migraines.” Subsequent jobs captured audio utterances for accepted text strings.<br>
>
>Note that some of the labels are incorrect and some of the audio files have poor quality. I would recommend cleaning the dataset before training any machine learning models.<br>
>
>This dataset contains both the audio utterances and corresponding transcriptions.<br>
What's new<br>
* All columns have been dropped except for the file_path and the phrase<br>
* All audio is loaded into the DatasetDict as a 1D float32 array<br>
* All audio is resampled to 16 kHz<br>
* The new structure:

```
train = {
    'audio': {
        'path': file_path,  # the mp3 files are not included here; please visit Kaggle to download them
        'array': waveform_np,
        'sampling_rate': 16000
    },
    'sentence': the text transcription
}
```
|
[] |
[
"TAGS\n#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-apache-2.0 #medical #region-us \n"
] |
[
49
] |
[
"passage: TAGS\n#task_categories-automatic-speech-recognition #size_categories-1K<n<10K #language-English #license-apache-2.0 #medical #region-us \n"
] |
b9442a538c8c4364c889bb85235dff6a3916711a
|
# Dataset Card for "orca-gpt4-subset-1k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
caisarl76/orca-gpt4-subset-1k
|
[
"region:us"
] |
2023-10-10T04:15:48+00:00
|
{"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "response", "dtype": "string"}, {"name": "__index_level_0__", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 1703047, "num_examples": 1000}], "download_size": 947311, "dataset_size": 1703047}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T04:15:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "orca-gpt4-subset-1k"
More Information needed
|
[
"# Dataset Card for \"orca-gpt4-subset-1k\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"orca-gpt4-subset-1k\"\n\nMore Information needed"
] |
[
6,
20
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"orca-gpt4-subset-1k\"\n\nMore Information needed"
] |
a2b2f8e0544c2bdb948677c355cf092bb48e8c2e
|
# Chatbot_data.
Chatbot_data_for_Korean v1.0
## Data description.
This is synthetic data. Some of the breakup-related questions were written with reference to stories that frequently appear on the Daum cafe "사랑보다 아름다운 실연" ("A Breakup More Beautiful Than Love", http://cafe116.daum.net/_c21_/home?grpid=1bld ).
For example, for a question such as "It has been ten days (or 100 days) since my breakup," the answer was written so that the chatbot responds with comfort.
1. 11,876 question-answer pairs for chatbot training (see the loading sketch below)
2. Labels: everyday small talk 0, breakup (negative) 1, love (positive) 2
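A minimal loading sketch; the CSV file name and the column names (`Q`, `A`, `label`) are assumptions about the GitHub repository's layout, not guaranteed by this card:

```python
import pandas as pd

# File name and column names are assumptions about the repository layout.
df = pd.read_csv("ChatbotData.csv")

print(len(df))                     # expected: 11,876 question-answer pairs
print(df["label"].value_counts())  # 0: everyday, 1: breakup (negative), 2: love (positive)
```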
## Citation
Youngsook Song. (2018). Chatbot_data_for_Korean v1.0 [Online]. Available: https://github.com/songys/Chatbot_data (downloaded 2022. June. 29.)
|
songys/ChatbotData
|
[
"license:cc-by-sa-4.0",
"region:us"
] |
2023-10-10T04:22:01+00:00
|
{"license": "cc-by-sa-4.0"}
|
2023-10-10T04:24:31+00:00
|
[] |
[] |
TAGS
#license-cc-by-sa-4.0 #region-us
|
# Chatbot_data.
Chatbot_data_for_Korean v1.0
## Data description.
This is synthetic data. Some of the breakup-related questions were written with reference to stories that frequently appear on the Daum cafe "사랑보다 아름다운 실연" ("A Breakup More Beautiful Than Love", URL ).
For example, for a question such as "It has been ten days (or 100 days) since my breakup," the answer was written so that the chatbot responds with comfort.
1. 11,876 question-answer pairs for chatbot training
2. Labels: everyday small talk 0, breakup (negative) 1, love (positive) 2
## Citation
Youngsook Song. (2018). Chatbot_data_for_Korean v1.0 [Online]. Available: URL (downloaded 2022. June. 29.)
|
[
"# Chatbot_data. \nChatbot_data_for_Korean v1.0",
"## Data description. \n\n인공데이터입니다. 일부 이별과 관련된 질문에서 다음카페 \"사랑보다 아름다운 실연( URL )\"에서 자주 나오는 이야기들을 참고하여 제작하였습니다. \n가령 \"이별한 지 열흘(또는 100일) 되었어요\"라는 질문에 챗봇이 위로한다는 취지로 답변을 작성하였습니다. \n\n\n1. 챗봇 트레이닝용 문답 페어 11,876개 \n2. 일상다반사 0, 이별(부정) 1, 사랑(긍정) 2로 레이블링"
] |
[
"TAGS\n#license-cc-by-sa-4.0 #region-us \n",
"# Chatbot_data. \nChatbot_data_for_Korean v1.0",
"## Data description. \n\n인공데이터입니다. 일부 이별과 관련된 질문에서 다음카페 \"사랑보다 아름다운 실연( URL )\"에서 자주 나오는 이야기들을 참고하여 제작하였습니다. \n가령 \"이별한 지 열흘(또는 100일) 되었어요\"라는 질문에 챗봇이 위로한다는 취지로 답변을 작성하였습니다. \n\n\n1. 챗봇 트레이닝용 문답 페어 11,876개 \n2. 일상다반사 0, 이별(부정) 1, 사랑(긍정) 2로 레이블링"
] |
[
17,
17,
115
] |
[
"passage: TAGS\n#license-cc-by-sa-4.0 #region-us \n# Chatbot_data. \nChatbot_data_for_Korean v1.0## Data description. \n\n인공데이터입니다. 일부 이별과 관련된 질문에서 다음카페 \"사랑보다 아름다운 실연( URL )\"에서 자주 나오는 이야기들을 참고하여 제작하였습니다. \n가령 \"이별한 지 열흘(또는 100일) 되었어요\"라는 질문에 챗봇이 위로한다는 취지로 답변을 작성하였습니다. \n\n\n1. 챗봇 트레이닝용 문답 페어 11,876개 \n2. 일상다반사 0, 이별(부정) 1, 사랑(긍정) 2로 레이블링"
] |
8f669a142e92fd607cfe19fb54b759932e162acb
|
# AutoTrain Dataset for project: demo-resnet50-test
## Dataset Description
This dataset has been automatically processed by AutoTrain for project demo-resnet50-test.
### Languages
The BCP-47 code for the dataset's language is unk.
## Dataset Structure
### Data Instances
A sample from this dataset looks as follows:
```json
[
{
"image": "<600x600 RGB PIL image>",
"target": 0
},
{
"image": "<600x799 RGB PIL image>",
"target": 1
}
]
```
### Dataset Fields
The dataset has the following fields (also called "features"):
```json
{
"image": "Image(decode=True, id=None)",
"target": "ClassLabel(names=['000', '005', '033'], id=None)"
}
```
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
| Split name | Num samples |
| ------------ | ------------------- |
| train | 288 |
| valid | 75 |
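A minimal sketch of reading one sample, assuming the project loads with the standard `datasets` API; split names follow the table above and the label lookup uses the ClassLabel feature from the fields section:

```python
from datasets import load_dataset

ds = load_dataset("SRGui/simple_cn_food_demo", split="train")

image = ds[0]["image"]    # a PIL image
target = ds[0]["target"]  # integer class index

# Map the index back to its class name ('000', '005' or '033').
print(ds.features["target"].int2str(target))
```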
|
SRGui/simple_cn_food_demo
|
[
"task_categories:image-classification",
"region:us"
] |
2023-10-10T04:38:33+00:00
|
{"task_categories": ["image-classification"]}
|
2023-10-10T04:46:24+00:00
|
[] |
[] |
TAGS
#task_categories-image-classification #region-us
|
AutoTrain Dataset for project: demo-resnet50-test
=================================================
Dataset Description
-------------------
This dataset has been automatically processed by AutoTrain for project demo-resnet50-test.
### Languages
The BCP-47 code for the dataset's language is unk.
Dataset Structure
-----------------
### Data Instances
A sample from this dataset looks as follows:
### Dataset Fields
The dataset has the following fields (also called "features"):
### Dataset Splits
This dataset is split into a train and validation split. The split sizes are as follows:
|
[
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
"TAGS\n#task_categories-image-classification #region-us \n",
"### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nA sample from this dataset looks as follows:",
"### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):",
"### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
[
17,
27,
17,
23,
27
] |
[
"passage: TAGS\n#task_categories-image-classification #region-us \n### Languages\n\n\nThe BCP-47 code for the dataset's language is unk.\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nA sample from this dataset looks as follows:### Dataset Fields\n\n\nThe dataset has the following fields (also called \"features\"):### Dataset Splits\n\n\nThis dataset is split into a train and validation split. The split sizes are as follow:"
] |
df7a66cbbc202f166ce4dab9d2f3576f84310336
|
# HRC: Building a human rights corpus for interactive generation models
# 대화형 생성 모델을 위한 인권코퍼스 구축
## 참조 데이터
- 대한민국 [국가인권위원회](https://case.humanrights.go.kr/dici/diciList.do)의 결정례와 상담사례 참조
- 문체 변경과 질의 응답으로 변경하기 위해서 전후 맥락을 고려한 예시문을 만들고 GPT-3.5-turbo 을 이용하여 원샷 학습후 문답 생성
## 데이터 구조
- 데이터 구조:
  - source_copus/
    - counsel.jsonl
    - decision.jsonl
  - humane_right_copus_v1.jsonl
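A minimal sketch of reading one of the `.jsonl` files listed above, assuming the usual one-JSON-object-per-line layout (the per-record field names are not documented in this card, so the snippet only inspects whatever keys exist):
```python
# Minimal sketch: read a JSON-Lines corpus file.
# The filename is taken from the layout above; the record schema is
# not documented in this card, so we only inspect whatever keys exist.
import json

with open("humane_right_copus_v1.jsonl", encoding="utf-8") as f:
    records = [json.loads(line) for line in f if line.strip()]

print(len(records))        # number of Q&A records in the release
print(records[0].keys())   # inspect the fields actually used
```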
## 프롬프트 예시
```
[상담례 prompt]
주어진 상담 문서를 자연스러운 질문, 답변 형태로 변형해 주세요.
답변이 끝나면 '#####'를 작성해 주세요.
반드시 원본의 답변 내용을 기반으로 답변해야 합니다.
질문은 최대한 간결하게 작성해 주세요.
#####
상담 내용: 시청 앞 광장에서 노동조합이 기자회견을 하고 있습니다. 그런데 경찰 세 개 중대가 기자회견장을 에워싸고 집회를 해산하라는 방송을 계속 내보내고 있습니다. 우리는 기자회견을 하고 있으며 설령 집회라 하더라도 평화적으로 하는 집회에 대해 해산 명령을 하는 것은 부당합니다.
원본 상담 답변: 신고를 하지 아니하였다는 이유만으로 개최가 허용되지 않는 집회 내지 시위라고 단정할 수 없습니다. 기자회견이 개인 발언 및 구호 제창 등 미신고 옥외집회의 성격을 띠었다고 하더라도 헌법상 표현의 자유와 집회 시위의 자유라는 민주주의의 근간을 이루는 국민의 기본권이 최대한 보장되도록 해야 합니다. 참고로 대법원(2012도9776)은 “기자회견이 집회에 해당한다 하더라도, 해산명령불응죄가 성립하려면 집회로 인해 타인의 법익이나 공공의 안녕질서에 대한 직접적인 위험이 명백하게 초래된 경우에 해당”하는 것으로 판단한 바 있습니다. 다만, 기자회견이라는 이름으로 개최되고, 일부 기자회견의 성격이 가미되었다고 하더라도 집회로서의 성격을 부인할 수 없다면 신고의 의무까지 면제된다고 보기는 어려울 수 있습니다.
간결한 질문: 시청 앞 광장에서 노동조합이 기자회견을 진행 중입니다. 그런데 경찰이 기자회견장을 둘러싸고 해산하라는 방송을 내보내고 있습니다. 이런 상황에서 경찰의 해산 명령은 정당한 것인가요?
답변: 신고를 하지 않은 집회가 허용되지 않는다는 단순한 이유만으로 기자회견을 미신고 옥외집회로 판단하는 것은 부적절합니다. 기자회견도 표현의 자유와 집회 시위의 자유, 민주주의의 핵심적인 국민의 기본권에 해당하므로 최대한 보장되어야 합니다. 대법원의 판례(2012도9776)에 따르면, "기자회견이 집회에 해당한다 해도, 그 집회로 인해 다른 사람의 법적 이익이나 공공 안녕질서에 대한 명백한 위험이 발생해야 해산명령불응죄가 성립한다"고 하였습니다. 그렇지만, 기자회견이 집회의 성격을 함께 갖는 경우 신고 의무를 면제받는 것은 아닙니다.
#####
상담 내용: {}
원본 상담 답변: {}
간결한 질문:
[결정례 prompt]
주어진 요청, 근거, 결론을 참고하여, 새로운 법률 관련 질문과 답변을 만들어주세요.
답변은 500자 이내로 작성해주세요.
반드시 주어진 자료의 사실을 활용해야 합니다.
답변 끝나면, '*****' 를 작성해주세요.
*****
요청:
진정인은 사회복무요원으로 피진정기관에서 업무 보조 및 폐의약품 수거 일을 하였다. 2021. 7. 26.폐의약품 수거를 위해 진정인의 자전거를 타고 인도 위를 지나다 뛰어오던 행인을 치는 교통사고를 내어 벌금형40만을 선고받았다.교통사고 피해자는 전치 6주 진단을 받았고, 1,30만원가량의 국가배상 청구를 신청한 상태이며, 피진정인은 진정인에게중과실 책임이 있기 때문에 구상권을 청구할 수 있다고 한다.진정인이 인도에서 자전거를 운행한 것은 잘못이지만, 공무 중 발생한 사고에 대해 사회복무요원이 모든 책임을 지는 것은 부당하다. 피진정기관이 진정인에게 구상권을 행사하지 않도록 도와주기 바란다.
근거:
1. 진정인이 자전거로 인도를 횡단하는 등 중과실 책임이 있긴 하나, 사전에 피진정인이 복무관리기관의 장으로서 주의 의무를 다하였다면 진정인이 교통사고에 이르지 않았을 수 있고, 20대 초반의 사회초년생이 벌금을 비롯해 고액의 손해배상 부담을 지는 상황을 피할 수 있었을 것임. 2. 사회복무요원은 병역의무를 이행하고자 국가기관, 지방자치단체 등에 소속되어 공익목적 업무 수행 차원에서 사회서비스 및 행정업무에 복무하고 있는 자이고, 「병역법」 제31조에서도 사회복무요원의 직무상 행위를 공무수행으로 규정하고 있는 바, 공무수행 중 발생한 사고에 대하여 사회복무요원을 공무원 등과 달리 대우할 합리적인 이유가 없음. 따라서 법적 지원체계 마련이 필요함. 3. 진정 취지가 구상권을 행사하지 않도록 해달라는 요청이므로, 이는 헌법상 권리 침해 여부를 판단할 사안이 아님.
결론:
주문 1 : 1. 광명시장에게, 안전사고 위험이 큰 분야에 소속 사회복무요원을 가급적 배치하지 않도록 하고, 배치가 불가피할 때에는 사전에 안전교육을 철저히 실시하도록 의견을 표명합니다. 주문 2 : 2. 병무청장에게, 사회복무요원의 복무 관련 사고 발생 시 초기단계부터 지원하고 민형사상 부담을 최소화하는 지원체계를 마련하도록 의견을 표명합니다. 주문 3 : 3. 이 사건 진정은 각하합니다.
위를 참고한 새로운 질문과 답변
질문: 사회복무요원이 공무 중 발생한 교통사고로 인한 피해자에 대한 구상권을 행사할 수 있는지에 대한 법적 근거가 어떻게 되는지 알려주세요.
답변: 사회복무요원이 공무 중 발생한 교통사고로 인한 피해자에 대한 구상권 여부는 법적으로 인정됩니다. 이는 병역법 제31조에 따라 사회복무요원의 직무를 공무수행으로 규정하고 있으며, 중과실 책임이 있더라도 사전에 안전 조치를 취하거나 교통법규를 준수해 사고를 예방할 의무가 있음을 의미합니다. 따라서 피해자는 피사회복무요원에 대한 손해배상을 청구할 수 있습니다.
*****
요청:
{}
근거:
{}
결론:
{}
위를 참고한 새로운 질문과 답변
질문:
```
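As a rough illustration of the one-shot generation step described above, the sketch below fills the counselling template with one source record and sends it to `gpt-3.5-turbo`. The `COUNSEL_PROMPT` and `generate_qa` names, the client library version, and the decoding settings are all assumptions for illustration; the card does not specify them.
```python
# Minimal sketch of the one-shot Q&A generation described above.
# COUNSEL_PROMPT stands in for the counselling template shown earlier;
# its two "{}" slots take the case text and the original answer.
# Uses the openai>=1.0 client; the authors' exact setup is not stated.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment

COUNSEL_PROMPT = "..."  # the counselling prompt template shown above

def generate_qa(case_text: str, original_answer: str) -> str:
    prompt = COUNSEL_PROMPT.format(case_text, original_answer)
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0.7,  # assumed; not specified in the card
    )
    # The model continues after the final "간결한 질문:" line and ends
    # with the '#####' sentinel requested by the template.
    return resp.choices[0].message.content.split("#####")[0].strip()
```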
## 인용
```
@inproceedings{song2023,
author = {송영숙 and 심상진 and 김성현},
title = {대화형 생성 모델을 위한 인권 코퍼스 구축},
booktitle = {한글 및 한국어 정보처리 학술대회 (발표 예정)},
year = {2023},
publisher = {한글 및 한국어 정보처리 학회}
}
```
|
songys/Ko_humane_right_copus
|
[
"license:cc-by-sa-3.0",
"region:us"
] |
2023-10-10T04:45:39+00:00
|
{"license": "cc-by-sa-3.0"}
|
2023-10-10T04:47:59+00:00
|
[] |
[] |
TAGS
#license-cc-by-sa-3.0 #region-us
|
# HRC: Building a human rights corpus for interactive generation models
# 대화형 생성 모델을 위한 인권코퍼스 구축
## 참조 데이터
- 대한민국 국가인권위원회의 결정례와 상담사례 참조
- 문체 변경과 질의 응답으로 변경하기 위해서 전후 맥락을 고려한 예시문을 만들고 GPT-3.5-turbo 을 이용하여 원샷 학습후 문답 생성
## 데이터 구조
- 데이터 구조:
  - source_copus/
    - URL
    - URL
  - humane_right_copus_v1.jsonl
## 프롬프트 예시
## 인용
|
[
"# HRC: Building a human rights corpus for interactive generation models",
"## 참조 데이터 \n\n- 대한민국 국가인권위원회의 결정례와 상담사례 참조 \n- 문체 변경과 질의 응답으로 변경하기 위해서 전후 맥락을 고려한 예시문을 만들고 GPT-3.5-turbo 을 이용하여 원샷 학습후 문답 생성",
"## 데이터 구조 \n\n- 데이터 구조 : source_copus---URL \n ---URL \n humane_right_copus_v1.jsonl",
"## 프롬프트 예시",
"## 인용"
] |
[
"TAGS\n#license-cc-by-sa-3.0 #region-us \n",
"# HRC: Building a human rights corpus for interactive generation models",
"## 참조 데이터 \n\n- 대한민국 국가인권위원회의 결정례와 상담사례 참조 \n- 문체 변경과 질의 응답으로 변경하기 위해서 전후 맥락을 고려한 예시문을 만들고 GPT-3.5-turbo 을 이용하여 원샷 학습후 문답 생성",
"## 데이터 구조 \n\n- 데이터 구조 : source_copus---URL \n ---URL \n humane_right_copus_v1.jsonl",
"## 프롬프트 예시",
"## 인용"
] |
[
17,
13,
57,
29,
7,
3
] |
[
"passage: TAGS\n#license-cc-by-sa-3.0 #region-us \n# HRC: Building a human rights corpus for interactive generation models## 참조 데이터 \n\n- 대한민국 국가인권위원회의 결정례와 상담사례 참조 \n- 문체 변경과 질의 응답으로 변경하기 위해서 전후 맥락을 고려한 예시문을 만들고 GPT-3.5-turbo 을 이용하여 원샷 학습후 문답 생성## 데이터 구조 \n\n- 데이터 구조 : source_copus---URL \n ---URL \n humane_right_copus_v1.jsonl## 프롬프트 예시## 인용"
] |
4b292300a872f6f40f3cb51661f12d857fdc5030
|
# Dataset Card for "invention_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/invention_prompts
|
[
"region:us"
] |
2023-10-10T04:47:13+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 96461, "num_examples": 1000}], "download_size": 2138, "dataset_size": 96461}}
|
2023-10-10T04:47:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "invention_prompts"
More Information needed
|
[
"# Dataset Card for \"invention_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"invention_prompts\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"invention_prompts\"\n\nMore Information needed"
] |
481552efb46e227b1b9a4bbf69855833f0e92c52
|
# Dataset Card for "llama-2-nuv-intent-noE-pp"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Luciya/llama-2-nuv-intent-noE-pp
|
[
"region:us"
] |
2023-10-10T04:58:05+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 791845, "num_examples": 1585}], "download_size": 111893, "dataset_size": 791845}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T04:58:08+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama-2-nuv-intent-noE-pp"
More Information needed
|
[
"# Dataset Card for \"llama-2-nuv-intent-noE-pp\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-2-nuv-intent-noE-pp\"\n\nMore Information needed"
] |
[
6,
24
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-2-nuv-intent-noE-pp\"\n\nMore Information needed"
] |
29a03f0c94a2bc43738955ee2ee86614702811ec
|
# Dataset Card for "llama-2-nuv-intent-noE"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Luciya/llama-2-nuv-intent-noE
|
[
"region:us"
] |
2023-10-10T05:02:19+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 711010, "num_examples": 1585}], "download_size": 0, "dataset_size": 711010}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T05:04:10+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama-2-nuv-intent-noE"
More Information needed
|
[
"# Dataset Card for \"llama-2-nuv-intent-noE\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-2-nuv-intent-noE\"\n\nMore Information needed"
] |
[
6,
22
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-2-nuv-intent-noE\"\n\nMore Information needed"
] |
7b923b8d5cadd6b19a84d57cd36d06e14f09cb39
|
# Dataset Card for Evaluation run of krevas/LDCC-Instruct-Llama-2-ko-13B-v2
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/krevas/LDCC-Instruct-Llama-2-ko-13B-v2
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [krevas/LDCC-Instruct-Llama-2-ko-13B-v2](https://huggingface.co/krevas/LDCC-Instruct-Llama-2-ko-13B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_krevas__LDCC-Instruct-Llama-2-ko-13B-v2",
"harness_truthfulqa_mc_0",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-10T06:04:26.663902](https://huggingface.co/datasets/open-llm-leaderboard/details_krevas__LDCC-Instruct-Llama-2-ko-13B-v2/blob/main/results_2023-10-10T06-04-26.663902.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.45958883488115343,
"acc_stderr": 0.034511714778603424,
"acc_norm": 0.4636864222606454,
"acc_norm_stderr": 0.03449288105358144,
"mc1": 0.2668298653610771,
"mc1_stderr": 0.015483691939237265,
"mc2": 0.39776112473254976,
"mc2_stderr": 0.013677730634490858
},
"harness|arc:challenge|25": {
"acc": 0.5298634812286689,
"acc_stderr": 0.014585305840007105,
"acc_norm": 0.5639931740614335,
"acc_norm_stderr": 0.014491225699230916
},
"harness|hellaswag|10": {
"acc": 0.6105357498506274,
"acc_stderr": 0.004866322258335963,
"acc_norm": 0.8181637124078869,
"acc_norm_stderr": 0.0038492126228151717
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.29,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.43703703703703706,
"acc_stderr": 0.04284958639753399,
"acc_norm": 0.43703703703703706,
"acc_norm_stderr": 0.04284958639753399
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.4342105263157895,
"acc_stderr": 0.040335656678483205,
"acc_norm": 0.4342105263157895,
"acc_norm_stderr": 0.040335656678483205
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.53,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.53,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.46037735849056605,
"acc_stderr": 0.030676096599389188,
"acc_norm": 0.46037735849056605,
"acc_norm_stderr": 0.030676096599389188
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5486111111111112,
"acc_stderr": 0.041614023984032786,
"acc_norm": 0.5486111111111112,
"acc_norm_stderr": 0.041614023984032786
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.25,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.25,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.27,
"acc_stderr": 0.0446196043338474,
"acc_norm": 0.27,
"acc_norm_stderr": 0.0446196043338474
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.22,
"acc_stderr": 0.04163331998932268,
"acc_norm": 0.22,
"acc_norm_stderr": 0.04163331998932268
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.3352601156069364,
"acc_stderr": 0.03599586301247077,
"acc_norm": 0.3352601156069364,
"acc_norm_stderr": 0.03599586301247077
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.17647058823529413,
"acc_stderr": 0.03793281185307809,
"acc_norm": 0.17647058823529413,
"acc_norm_stderr": 0.03793281185307809
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.62,
"acc_stderr": 0.048783173121456316,
"acc_norm": 0.62,
"acc_norm_stderr": 0.048783173121456316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.3617021276595745,
"acc_stderr": 0.03141082197596239,
"acc_norm": 0.3617021276595745,
"acc_norm_stderr": 0.03141082197596239
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2631578947368421,
"acc_stderr": 0.04142439719489359,
"acc_norm": 0.2631578947368421,
"acc_norm_stderr": 0.04142439719489359
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.3103448275862069,
"acc_stderr": 0.03855289616378948,
"acc_norm": 0.3103448275862069,
"acc_norm_stderr": 0.03855289616378948
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.24074074074074073,
"acc_stderr": 0.0220190800122179,
"acc_norm": 0.24074074074074073,
"acc_norm_stderr": 0.0220190800122179
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.23015873015873015,
"acc_stderr": 0.03764950879790606,
"acc_norm": 0.23015873015873015,
"acc_norm_stderr": 0.03764950879790606
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.27,
"acc_stderr": 0.044619604333847415,
"acc_norm": 0.27,
"acc_norm_stderr": 0.044619604333847415
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.5096774193548387,
"acc_stderr": 0.02843867799890955,
"acc_norm": 0.5096774193548387,
"acc_norm_stderr": 0.02843867799890955
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.3399014778325123,
"acc_stderr": 0.033327690684107895,
"acc_norm": 0.3399014778325123,
"acc_norm_stderr": 0.033327690684107895
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.43,
"acc_stderr": 0.049756985195624284,
"acc_norm": 0.43,
"acc_norm_stderr": 0.049756985195624284
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.5818181818181818,
"acc_stderr": 0.03851716319398395,
"acc_norm": 0.5818181818181818,
"acc_norm_stderr": 0.03851716319398395
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.5404040404040404,
"acc_stderr": 0.035507024651313425,
"acc_norm": 0.5404040404040404,
"acc_norm_stderr": 0.035507024651313425
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.689119170984456,
"acc_stderr": 0.033403619062765864,
"acc_norm": 0.689119170984456,
"acc_norm_stderr": 0.033403619062765864
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.40512820512820513,
"acc_stderr": 0.024890471769938145,
"acc_norm": 0.40512820512820513,
"acc_norm_stderr": 0.024890471769938145
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.27037037037037037,
"acc_stderr": 0.02708037281514565,
"acc_norm": 0.27037037037037037,
"acc_norm_stderr": 0.02708037281514565
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.40756302521008403,
"acc_stderr": 0.03191863374478465,
"acc_norm": 0.40756302521008403,
"acc_norm_stderr": 0.03191863374478465
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.25165562913907286,
"acc_stderr": 0.035433042343899844,
"acc_norm": 0.25165562913907286,
"acc_norm_stderr": 0.035433042343899844
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.618348623853211,
"acc_stderr": 0.02082814851702258,
"acc_norm": 0.618348623853211,
"acc_norm_stderr": 0.02082814851702258
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.2175925925925926,
"acc_stderr": 0.028139689444859672,
"acc_norm": 0.2175925925925926,
"acc_norm_stderr": 0.028139689444859672
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.6078431372549019,
"acc_stderr": 0.03426712349247273,
"acc_norm": 0.6078431372549019,
"acc_norm_stderr": 0.03426712349247273
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.6244725738396625,
"acc_stderr": 0.03152256243091156,
"acc_norm": 0.6244725738396625,
"acc_norm_stderr": 0.03152256243091156
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.547085201793722,
"acc_stderr": 0.033408675019233246,
"acc_norm": 0.547085201793722,
"acc_norm_stderr": 0.033408675019233246
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.5419847328244275,
"acc_stderr": 0.04369802690578756,
"acc_norm": 0.5419847328244275,
"acc_norm_stderr": 0.04369802690578756
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.6942148760330579,
"acc_stderr": 0.042059539338841226,
"acc_norm": 0.6942148760330579,
"acc_norm_stderr": 0.042059539338841226
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.5833333333333334,
"acc_stderr": 0.04766075165356462,
"acc_norm": 0.5833333333333334,
"acc_norm_stderr": 0.04766075165356462
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6134969325153374,
"acc_stderr": 0.038258255488486076,
"acc_norm": 0.6134969325153374,
"acc_norm_stderr": 0.038258255488486076
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.4017857142857143,
"acc_stderr": 0.04653333146973646,
"acc_norm": 0.4017857142857143,
"acc_norm_stderr": 0.04653333146973646
},
"harness|hendrycksTest-management|5": {
"acc": 0.6116504854368932,
"acc_stderr": 0.04825729337356389,
"acc_norm": 0.6116504854368932,
"acc_norm_stderr": 0.04825729337356389
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.7564102564102564,
"acc_stderr": 0.028120966503914425,
"acc_norm": 0.7564102564102564,
"acc_norm_stderr": 0.028120966503914425
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.6398467432950191,
"acc_stderr": 0.017166362471369295,
"acc_norm": 0.6398467432950191,
"acc_norm_stderr": 0.017166362471369295
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.5115606936416185,
"acc_stderr": 0.02691189868637792,
"acc_norm": 0.5115606936416185,
"acc_norm_stderr": 0.02691189868637792
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.2569832402234637,
"acc_stderr": 0.014614465821966337,
"acc_norm": 0.2569832402234637,
"acc_norm_stderr": 0.014614465821966337
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.4803921568627451,
"acc_stderr": 0.028607893699576063,
"acc_norm": 0.4803921568627451,
"acc_norm_stderr": 0.028607893699576063
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.5691318327974276,
"acc_stderr": 0.028125340983972714,
"acc_norm": 0.5691318327974276,
"acc_norm_stderr": 0.028125340983972714
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.558641975308642,
"acc_stderr": 0.027628737155668777,
"acc_norm": 0.558641975308642,
"acc_norm_stderr": 0.027628737155668777
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.35106382978723405,
"acc_stderr": 0.028473501272963768,
"acc_norm": 0.35106382978723405,
"acc_norm_stderr": 0.028473501272963768
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.37157757496740546,
"acc_stderr": 0.012341828514528285,
"acc_norm": 0.37157757496740546,
"acc_norm_stderr": 0.012341828514528285
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.3713235294117647,
"acc_stderr": 0.02934980313976587,
"acc_norm": 0.3713235294117647,
"acc_norm_stderr": 0.02934980313976587
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.48366013071895425,
"acc_stderr": 0.02021703065318646,
"acc_norm": 0.48366013071895425,
"acc_norm_stderr": 0.02021703065318646
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.5727272727272728,
"acc_stderr": 0.04738198703545483,
"acc_norm": 0.5727272727272728,
"acc_norm_stderr": 0.04738198703545483
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.34285714285714286,
"acc_stderr": 0.03038726291954773,
"acc_norm": 0.34285714285714286,
"acc_norm_stderr": 0.03038726291954773
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.6368159203980099,
"acc_stderr": 0.03400598505599014,
"acc_norm": 0.6368159203980099,
"acc_norm_stderr": 0.03400598505599014
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.63,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.63,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-virology|5": {
"acc": 0.42168674698795183,
"acc_stderr": 0.03844453181770917,
"acc_norm": 0.42168674698795183,
"acc_norm_stderr": 0.03844453181770917
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7251461988304093,
"acc_stderr": 0.03424042924691584,
"acc_norm": 0.7251461988304093,
"acc_norm_stderr": 0.03424042924691584
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2668298653610771,
"mc1_stderr": 0.015483691939237265,
"mc2": 0.39776112473254976,
"mc2_stderr": 0.013677730634490858
}
}
```
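A minimal sketch of pulling numbers out of such a results payload, assuming the JSON block above has been saved locally as `results.json`:
```python
# Minimal sketch: read the aggregated metrics shown above.
# Assumes the JSON block has been saved locally as results.json.
import json

with open("results.json") as f:
    results = json.load(f)

print("overall acc      :", results["all"]["acc"])
print("overall acc_norm :", results["all"]["acc_norm"])

# Mean accuracy over the hendrycksTest (MMLU) subtasks.
mmlu = [v["acc"] for k, v in results.items()
        if k.startswith("harness|hendrycksTest")]
print("MMLU mean acc    :", sum(mmlu) / len(mmlu))
```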
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_krevas__LDCC-Instruct-Llama-2-ko-13B-v2
|
[
"region:us"
] |
2023-10-10T05:04:45+00:00
|
{"pretty_name": "Evaluation run of krevas/LDCC-Instruct-Llama-2-ko-13B-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [krevas/LDCC-Instruct-Llama-2-ko-13B-v2](https://huggingface.co/krevas/LDCC-Instruct-Llama-2-ko-13B-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_krevas__LDCC-Instruct-Llama-2-ko-13B-v2\",\n\t\"harness_truthfulqa_mc_0\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-10T06:04:26.663902](https://huggingface.co/datasets/open-llm-leaderboard/details_krevas__LDCC-Instruct-Llama-2-ko-13B-v2/blob/main/results_2023-10-10T06-04-26.663902.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.45958883488115343,\n \"acc_stderr\": 0.034511714778603424,\n \"acc_norm\": 0.4636864222606454,\n \"acc_norm_stderr\": 0.03449288105358144,\n \"mc1\": 0.2668298653610771,\n \"mc1_stderr\": 0.015483691939237265,\n \"mc2\": 0.39776112473254976,\n \"mc2_stderr\": 0.013677730634490858\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5298634812286689,\n \"acc_stderr\": 0.014585305840007105,\n \"acc_norm\": 0.5639931740614335,\n \"acc_norm_stderr\": 0.014491225699230916\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6105357498506274,\n \"acc_stderr\": 0.004866322258335963,\n \"acc_norm\": 0.8181637124078869,\n \"acc_norm_stderr\": 0.0038492126228151717\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.43703703703703706,\n \"acc_stderr\": 0.04284958639753399,\n \"acc_norm\": 0.43703703703703706,\n \"acc_norm_stderr\": 0.04284958639753399\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.4342105263157895,\n \"acc_stderr\": 0.040335656678483205,\n \"acc_norm\": 0.4342105263157895,\n \"acc_norm_stderr\": 0.040335656678483205\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.53,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.53,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.46037735849056605,\n \"acc_stderr\": 0.030676096599389188,\n \"acc_norm\": 0.46037735849056605,\n \"acc_norm_stderr\": 0.030676096599389188\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5486111111111112,\n \"acc_stderr\": 0.041614023984032786,\n \"acc_norm\": 0.5486111111111112,\n \"acc_norm_stderr\": 0.041614023984032786\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.25,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.25,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.0446196043338474,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.0446196043338474\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.22,\n \"acc_stderr\": 0.04163331998932268,\n \"acc_norm\": 0.22,\n \"acc_norm_stderr\": 0.04163331998932268\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.3352601156069364,\n \"acc_stderr\": 0.03599586301247077,\n \"acc_norm\": 0.3352601156069364,\n \"acc_norm_stderr\": 0.03599586301247077\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.17647058823529413,\n \"acc_stderr\": 0.03793281185307809,\n \"acc_norm\": 0.17647058823529413,\n \"acc_norm_stderr\": 0.03793281185307809\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.048783173121456316,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.048783173121456316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.3617021276595745,\n \"acc_stderr\": 0.03141082197596239,\n \"acc_norm\": 0.3617021276595745,\n \"acc_norm_stderr\": 0.03141082197596239\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2631578947368421,\n \"acc_stderr\": 0.04142439719489359,\n \"acc_norm\": 0.2631578947368421,\n \"acc_norm_stderr\": 0.04142439719489359\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.3103448275862069,\n \"acc_stderr\": 0.03855289616378948,\n \"acc_norm\": 0.3103448275862069,\n \"acc_norm_stderr\": 0.03855289616378948\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.24074074074074073,\n \"acc_stderr\": 0.0220190800122179,\n \"acc_norm\": 0.24074074074074073,\n \"acc_norm_stderr\": 0.0220190800122179\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.23015873015873015,\n \"acc_stderr\": 0.03764950879790606,\n \"acc_norm\": 0.23015873015873015,\n \"acc_norm_stderr\": 0.03764950879790606\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.27,\n \"acc_stderr\": 0.044619604333847415,\n \"acc_norm\": 0.27,\n \"acc_norm_stderr\": 0.044619604333847415\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.5096774193548387,\n \"acc_stderr\": 0.02843867799890955,\n \"acc_norm\": 0.5096774193548387,\n \"acc_norm_stderr\": 0.02843867799890955\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.3399014778325123,\n \"acc_stderr\": 0.033327690684107895,\n \"acc_norm\": 0.3399014778325123,\n \"acc_norm_stderr\": 0.033327690684107895\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.43,\n \"acc_stderr\": 0.049756985195624284,\n \"acc_norm\": 0.43,\n \"acc_norm_stderr\": 0.049756985195624284\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.5818181818181818,\n \"acc_stderr\": 0.03851716319398395,\n \"acc_norm\": 0.5818181818181818,\n \"acc_norm_stderr\": 0.03851716319398395\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.5404040404040404,\n \"acc_stderr\": 0.035507024651313425,\n \"acc_norm\": 0.5404040404040404,\n \"acc_norm_stderr\": 0.035507024651313425\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.689119170984456,\n \"acc_stderr\": 0.033403619062765864,\n \"acc_norm\": 0.689119170984456,\n \"acc_norm_stderr\": 0.033403619062765864\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.40512820512820513,\n \"acc_stderr\": 0.024890471769938145,\n \"acc_norm\": 0.40512820512820513,\n \"acc_norm_stderr\": 0.024890471769938145\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.27037037037037037,\n \"acc_stderr\": 0.02708037281514565,\n \"acc_norm\": 0.27037037037037037,\n \"acc_norm_stderr\": 0.02708037281514565\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.40756302521008403,\n \"acc_stderr\": 0.03191863374478465,\n \"acc_norm\": 0.40756302521008403,\n \"acc_norm_stderr\": 0.03191863374478465\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.25165562913907286,\n \"acc_stderr\": 0.035433042343899844,\n \"acc_norm\": 0.25165562913907286,\n \"acc_norm_stderr\": 0.035433042343899844\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.618348623853211,\n \"acc_stderr\": 0.02082814851702258,\n \"acc_norm\": 0.618348623853211,\n \"acc_norm_stderr\": 0.02082814851702258\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.2175925925925926,\n \"acc_stderr\": 0.028139689444859672,\n \"acc_norm\": 0.2175925925925926,\n \"acc_norm_stderr\": 0.028139689444859672\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.6078431372549019,\n \"acc_stderr\": 0.03426712349247273,\n \"acc_norm\": 0.6078431372549019,\n \"acc_norm_stderr\": 0.03426712349247273\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.6244725738396625,\n \"acc_stderr\": 0.03152256243091156,\n \"acc_norm\": 0.6244725738396625,\n \"acc_norm_stderr\": 0.03152256243091156\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.547085201793722,\n \"acc_stderr\": 0.033408675019233246,\n \"acc_norm\": 0.547085201793722,\n \"acc_norm_stderr\": 0.033408675019233246\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.5419847328244275,\n \"acc_stderr\": 0.04369802690578756,\n \"acc_norm\": 0.5419847328244275,\n \"acc_norm_stderr\": 0.04369802690578756\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.6942148760330579,\n \"acc_stderr\": 0.042059539338841226,\n \"acc_norm\": 0.6942148760330579,\n \"acc_norm_stderr\": 0.042059539338841226\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.5833333333333334,\n \"acc_stderr\": 0.04766075165356462,\n \"acc_norm\": 0.5833333333333334,\n \"acc_norm_stderr\": 0.04766075165356462\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6134969325153374,\n \"acc_stderr\": 0.038258255488486076,\n \"acc_norm\": 0.6134969325153374,\n \"acc_norm_stderr\": 0.038258255488486076\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.4017857142857143,\n \"acc_stderr\": 0.04653333146973646,\n \"acc_norm\": 0.4017857142857143,\n \"acc_norm_stderr\": 0.04653333146973646\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.6116504854368932,\n \"acc_stderr\": 0.04825729337356389,\n \"acc_norm\": 0.6116504854368932,\n \"acc_norm_stderr\": 0.04825729337356389\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.7564102564102564,\n \"acc_stderr\": 0.028120966503914425,\n \"acc_norm\": 0.7564102564102564,\n \"acc_norm_stderr\": 0.028120966503914425\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 
0.6398467432950191,\n \"acc_stderr\": 0.017166362471369295,\n \"acc_norm\": 0.6398467432950191,\n \"acc_norm_stderr\": 0.017166362471369295\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.5115606936416185,\n \"acc_stderr\": 0.02691189868637792,\n \"acc_norm\": 0.5115606936416185,\n \"acc_norm_stderr\": 0.02691189868637792\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.2569832402234637,\n \"acc_stderr\": 0.014614465821966337,\n \"acc_norm\": 0.2569832402234637,\n \"acc_norm_stderr\": 0.014614465821966337\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.4803921568627451,\n \"acc_stderr\": 0.028607893699576063,\n \"acc_norm\": 0.4803921568627451,\n \"acc_norm_stderr\": 0.028607893699576063\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.5691318327974276,\n \"acc_stderr\": 0.028125340983972714,\n \"acc_norm\": 0.5691318327974276,\n \"acc_norm_stderr\": 0.028125340983972714\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.558641975308642,\n \"acc_stderr\": 0.027628737155668777,\n \"acc_norm\": 0.558641975308642,\n \"acc_norm_stderr\": 0.027628737155668777\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.35106382978723405,\n \"acc_stderr\": 0.028473501272963768,\n \"acc_norm\": 0.35106382978723405,\n \"acc_norm_stderr\": 0.028473501272963768\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.37157757496740546,\n \"acc_stderr\": 0.012341828514528285,\n \"acc_norm\": 0.37157757496740546,\n \"acc_norm_stderr\": 0.012341828514528285\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.3713235294117647,\n \"acc_stderr\": 0.02934980313976587,\n \"acc_norm\": 0.3713235294117647,\n \"acc_norm_stderr\": 0.02934980313976587\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.48366013071895425,\n \"acc_stderr\": 0.02021703065318646,\n \"acc_norm\": 0.48366013071895425,\n \"acc_norm_stderr\": 0.02021703065318646\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.5727272727272728,\n \"acc_stderr\": 0.04738198703545483,\n \"acc_norm\": 0.5727272727272728,\n \"acc_norm_stderr\": 0.04738198703545483\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.34285714285714286,\n \"acc_stderr\": 0.03038726291954773,\n \"acc_norm\": 0.34285714285714286,\n \"acc_norm_stderr\": 0.03038726291954773\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.6368159203980099,\n \"acc_stderr\": 0.03400598505599014,\n \"acc_norm\": 0.6368159203980099,\n \"acc_norm_stderr\": 0.03400598505599014\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.63,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.63,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.42168674698795183,\n \"acc_stderr\": 0.03844453181770917,\n \"acc_norm\": 0.42168674698795183,\n \"acc_norm_stderr\": 0.03844453181770917\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7251461988304093,\n \"acc_stderr\": 0.03424042924691584,\n \"acc_norm\": 0.7251461988304093,\n \"acc_norm_stderr\": 0.03424042924691584\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2668298653610771,\n \"mc1_stderr\": 0.015483691939237265,\n \"mc2\": 0.39776112473254976,\n \"mc2_stderr\": 0.013677730634490858\n }\n}\n```", "repo_url": "https://huggingface.co/krevas/LDCC-Instruct-Llama-2-ko-13B-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": 
"[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|arc:challenge|25_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hellaswag|10_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-04-26.663902.parquet", 
"**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-04-26.663902.parquet", 
"**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-04-26.663902.parquet", 
"**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T06-04-26.663902.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T06-04-26.663902.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T06_04_26.663902", "path": ["results_2023-10-10T06-04-26.663902.parquet"]}, {"split": "latest", "path": ["results_2023-10-10T06-04-26.663902.parquet"]}]}]}
|
2023-10-10T05:05:44+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of krevas/LDCC-Instruct-Llama-2-ko-13B-v2
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model krevas/LDCC-Instruct-Llama-2-ko-13B-v2 on the Open LLM Leaderboard.
The dataset is composed of 61 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
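A minimal sketch, assuming the details repository follows the leaderboard's `open-llm-leaderboard/details_<org>__<model>` naming (the repo id is not spelled out in this record) and using one of the configuration names from the record's metadata:

```python
from datasets import load_dataset

# Repo id is an assumption based on the leaderboard's naming convention.
data = load_dataset(
    "open-llm-leaderboard/details_krevas__LDCC-Instruct-Llama-2-ko-13B-v2",
    "harness_truthfulqa_mc_0",  # any of the 61 configurations listed in the metadata works here
    split="train",  # "train" always points to the latest results
)
```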
## Latest results
These are the latest results from run 2023-10-10T06:04:26.663902 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
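The aggregated numbers themselves are not reproduced in this record; as a hedged sketch, they can be pulled from the "results" configuration and its "latest" split, both declared in the record's metadata (the row layout inside the parquet is an assumption):

```python
from datasets import load_dataset

# "results" config and "latest" split are declared in this record's metadata.
results = load_dataset(
    "open-llm-leaderboard/details_krevas__LDCC-Instruct-Llama-2-ko-13B-v2",  # assumed repo id, see above
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated metrics for the run
```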
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of krevas/LDCC-Instruct-Llama-2-ko-13B-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model krevas/LDCC-Instruct-Llama-2-ko-13B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-10T06:04:26.663902(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of krevas/LDCC-Instruct-Llama-2-ko-13B-v2",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model krevas/LDCC-Instruct-Llama-2-ko-13B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-10T06:04:26.663902(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
176,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of krevas/LDCC-Instruct-Llama-2-ko-13B-v2## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model krevas/LDCC-Instruct-Llama-2-ko-13B-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 61 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-10T06:04:26.663902(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0bc43e2c7e08be4abf59273978efb0533fb5729d
|
# Dataset Card for "islamic_invention_prompts"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Falah/islamic_invention_prompts
|
[
"region:us"
] |
2023-10-10T05:11:41+00:00
|
{"dataset_info": {"features": [{"name": "prompts", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 124665, "num_examples": 1000}], "download_size": 2170, "dataset_size": 124665}}
|
2023-10-10T05:12:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "islamic_invention_prompts"
More Information needed
|
[
"# Dataset Card for \"islamic_invention_prompts\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"islamic_invention_prompts\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"islamic_invention_prompts\"\n\nMore Information needed"
] |
a2701f2155c1371ba38b92a2c41e8b0c0fca7614
|
# Dataset Card for "fingpt-sentiment-train"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-sentiment-train
|
[
"region:us"
] |
2023-10-10T05:26:21+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 18860715, "num_examples": 76772}], "download_size": 6417302, "dataset_size": 18860715}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-20T21:45:14+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-sentiment-train"
More Information needed
|
[
"# Dataset Card for \"fingpt-sentiment-train\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-sentiment-train\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-sentiment-train\"\n\nMore Information needed"
] |
96e7f0d361237bf5f9c5127e4006cffbab6b70ed
|
# Dataset Card for "fingpt-headline"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-headline
|
[
"region:us"
] |
2023-10-10T05:31:29+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13343930, "num_examples": 82161}, {"name": "test", "num_bytes": 3339415, "num_examples": 20547}], "download_size": 647377, "dataset_size": 16683345}}
|
2023-10-10T05:31:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-headline"
More Information needed
|
[
"# Dataset Card for \"fingpt-headline\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-headline\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-headline\"\n\nMore Information needed"
] |
6e416e215fec9436d718528483f831e9ef1e2652
|
# Dataset Card for "fingpt-ner"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-ner
|
[
"region:us"
] |
2023-10-10T05:33:18+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 241523, "num_examples": 511}, {"name": "test", "num_bytes": 63634, "num_examples": 98}], "download_size": 105426, "dataset_size": 305157}}
|
2023-10-10T05:33:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-ner"
More Information needed
|
[
"# Dataset Card for \"fingpt-ner\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-ner\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-ner\"\n\nMore Information needed"
] |
368e30d101f13c925cd500cc3aea920ab4cc0211
|
# Dataset Card for "fingpt-finred-re"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-finred-re
|
[
"region:us"
] |
2023-10-10T05:36:53+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 11144078, "num_examples": 11400}, {"name": "test", "num_bytes": 2076314, "num_examples": 2136}], "download_size": 1290513, "dataset_size": 13220392}}
|
2023-10-10T05:40:16+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-finred-re"
More Information needed
|
[
"# Dataset Card for \"fingpt-finred-re\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-finred-re\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-finred-re\"\n\nMore Information needed"
] |
130ed6276b6ba5cc188eb4eafa558f3312f5bc4d
|
# Dataset Card for "fingpt-convfinqa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-convfinqa
|
[
"region:us"
] |
2023-10-10T05:37:17+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 52762154, "num_examples": 11104}, {"name": "test", "num_bytes": 6733552, "num_examples": 1490}], "download_size": 10979923, "dataset_size": 59495706}}
|
2023-10-10T05:44:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-convfinqa"
More Information needed
|
[
"# Dataset Card for \"fingpt-convfinqa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-convfinqa\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-convfinqa\"\n\nMore Information needed"
] |
c65953e39aaea4779efc54805c64769429c554c8
|
# Dataset Card for "fingpt-fiqa_qa"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-fiqa_qa
|
[
"region:us"
] |
2023-10-10T05:37:38+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 20914549, "num_examples": 17110}], "download_size": 10813846, "dataset_size": 20914549}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T05:51:12+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-fiqa_qa"
More Information needed
|
[
"# Dataset Card for \"fingpt-fiqa_qa\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-fiqa_qa\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-fiqa_qa\"\n\nMore Information needed"
] |
189d8fd915978eab7bcfed2aacf5a3f95ffadf94
|
# Dataset Card for Evaluation run of mistralai/Mistral-7B-Instruct-v0.1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-24T09:43:48.997990](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.1/blob/main/results_2023-10-24T09-43-48.997990.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each of them in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.37038590604026844,
"em_stderr": 0.00494543044549648,
"f1": 0.43100566275167973,
"f1_stderr": 0.00478990485809286,
"acc": 0.4398533245809979,
"acc_stderr": 0.01100025548646791
},
"harness|drop|3": {
"em": 0.37038590604026844,
"em_stderr": 0.00494543044549648,
"f1": 0.43100566275167973,
"f1_stderr": 0.00478990485809286
},
"harness|gsm8k|5": {
"acc": 0.1425322213798332,
"acc_stderr": 0.009629588445673814
},
"harness|winogrande|5": {
"acc": 0.7371744277821626,
"acc_stderr": 0.012370922527262006
}
}
```
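As a follow-up sketch (not part of the original card), the same aggregated metrics can be reloaded from the "results" configuration declared in this record's metadata, where "latest" aliases the most recent timestamped split:

```python
from datasets import load_dataset

# "results" config and "latest" split come from this record's metadata.
results = load_dataset(
    "open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.1",
    "results",
    split="latest",
)
print(results[0])  # aggregated metrics, e.g. the winogrande accuracy shown above
```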
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.1
|
[
"region:us"
] |
2023-10-10T05:39:11+00:00
|
{"pretty_name": "Evaluation run of mistralai/Mistral-7B-Instruct-v0.1", "dataset_summary": "Dataset automatically created during the evaluation run of model [mistralai/Mistral-7B-Instruct-v0.1](https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T09:43:48.997990](https://huggingface.co/datasets/open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.1/blob/main/results_2023-10-24T09-43-48.997990.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.37038590604026844,\n \"em_stderr\": 0.00494543044549648,\n \"f1\": 0.43100566275167973,\n \"f1_stderr\": 0.00478990485809286,\n \"acc\": 0.4398533245809979,\n \"acc_stderr\": 0.01100025548646791\n },\n \"harness|drop|3\": {\n \"em\": 0.37038590604026844,\n \"em_stderr\": 0.00494543044549648,\n \"f1\": 0.43100566275167973,\n \"f1_stderr\": 0.00478990485809286\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1425322213798332,\n \"acc_stderr\": 0.009629588445673814\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7371744277821626,\n \"acc_stderr\": 0.012370922527262006\n }\n}\n```", "repo_url": "https://huggingface.co/mistralai/Mistral-7B-Instruct-v0.1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|arc:challenge|25_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T09_43_48.997990", "path": ["**/details_harness|drop|3_2023-10-24T09-43-48.997990.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T09-43-48.997990.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T09_43_48.997990", "path": ["**/details_harness|gsm8k|5_2023-10-24T09-43-48.997990.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T09-43-48.997990.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hellaswag|10_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-38-48.353025.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-38-48.353025.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T06-38-48.353025.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T06-38-48.353025.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T06-38-48.353025.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T09_43_48.997990", "path": ["**/details_harness|winogrande|5_2023-10-24T09-43-48.997990.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T09-43-48.997990.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T06_38_48.353025", "path": ["results_2023-10-10T06-38-48.353025.parquet"]}, {"split": "2023_10_24T09_43_48.997990", "path": ["results_2023-10-24T09-43-48.997990.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T09-43-48.997990.parquet"]}]}]}
|
2023-10-24T08:44:01+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of mistralai/Mistral-7B-Instruct-v0.1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model mistralai/Mistral-7B-Instruct-v0.1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
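The snippet was stripped from this processed copy; a minimal sketch, assuming the details repo follows the leaderboard's usual `details_<org>__<model>` naming (the config and "latest" split are listed in the metadata above):

```python
from datasets import load_dataset

# Assumption: the repo id below follows the standard leaderboard pattern;
# "harness_winogrande_5" and the "latest" split appear in the repo metadata.
data = load_dataset(
    "open-llm-leaderboard/details_mistralai__Mistral-7B-Instruct-v0.1",
    "harness_winogrande_5",
    split="latest",
)
```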
## Latest results
These are the latest results from run 2023-10-24T09:43:48.997990 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of mistralai/Mistral-7B-Instruct-v0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mistralai/Mistral-7B-Instruct-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T09:43:48.997990(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mistralai/Mistral-7B-Instruct-v0.1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mistralai/Mistral-7B-Instruct-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T09:43:48.997990(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mistralai/Mistral-7B-Instruct-v0.1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model mistralai/Mistral-7B-Instruct-v0.1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T09:43:48.997990(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
eb232364c89bfab768eb3b0bed11735aaf0a4b98
|
# Dataset Card for "fingpt-headline-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
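The card is otherwise a stub; a minimal loading sketch, using only the splits (train/test) and feature names (`input`, `output`, `instruction`) declared in the repo metadata:

```python
from datasets import load_dataset

# Splits and columns come straight from the dataset's declared metadata.
ds = load_dataset("FinGPT/fingpt-headline-cls")
example = ds["train"][0]
print(example["instruction"])                      # the classification prompt
print(example["input"], "->", example["output"])   # headline and its label
```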
|
FinGPT/fingpt-headline-cls
|
[
"region:us"
] |
2023-10-10T05:39:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 12571278, "num_examples": 82161}, {"name": "test", "num_bytes": 3147768, "num_examples": 20547}], "download_size": 986960, "dataset_size": 15719046}}
|
2023-10-10T05:47:59+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-headline-cls"
More Information needed
|
[
"# Dataset Card for \"fingpt-headline-cls\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-headline-cls\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-headline-cls\"\n\nMore Information needed"
] |
5527379ef81887a5f37231de630f97d2f2b26278
|
# Dataset Card for "fingpt-sentiment-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-sentiment-cls
|
[
"region:us"
] |
2023-10-10T05:39:32+00:00
|
{"dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10908696, "num_examples": 47557}], "download_size": 3902114, "dataset_size": 10908696}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T05:49:38+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-sentiment-cls"
More Information needed
|
[
"# Dataset Card for \"fingpt-sentiment-cls\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-sentiment-cls\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-sentiment-cls\"\n\nMore Information needed"
] |
27939d47607e6fa1cc14342cae899943bb78a72b
|
# Dataset Card for "fingpt-ner-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-ner-cls
|
[
"region:us"
] |
2023-10-10T05:39:45+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 5730497, "num_examples": 13549}, {"name": "test", "num_bytes": 2112011, "num_examples": 3502}], "download_size": 298810, "dataset_size": 7842508}}
|
2023-10-10T05:42:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-ner-cls"
More Information needed
|
[
"# Dataset Card for \"fingpt-ner-cls\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-ner-cls\"\n\nMore Information needed"
] |
[
6,
18
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-ner-cls\"\n\nMore Information needed"
] |
30231b971806109fa44030c44ecde0a1ede7482a
|
# Dataset Card for "fingpt-finred-cls"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-finred-cls
|
[
"region:us"
] |
2023-10-10T05:41:40+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 23991756, "num_examples": 48474}, {"name": "test", "num_bytes": 3899700, "num_examples": 8928}], "download_size": 2897823, "dataset_size": 27891456}}
|
2023-10-10T05:41:54+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-finred-cls"
More Information needed
|
[
"# Dataset Card for \"fingpt-finred-cls\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-finred-cls\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-finred-cls\"\n\nMore Information needed"
] |
d1020610473e9b227f367fc362daf705c3f7e8c0
|
# Dataset Card for "fingpt-fineval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
FinGPT/fingpt-fineval
|
[
"region:us"
] |
2023-10-10T05:44:46+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 441991, "num_examples": 1056}, {"name": "test", "num_bytes": 117516, "num_examples": 265}], "download_size": 269193, "dataset_size": 559507}}
|
2023-10-10T05:45:52+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-fineval"
More Information needed
|
[
"# Dataset Card for \"fingpt-fineval\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-fineval\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-fineval\"\n\nMore Information needed"
] |
de588d956e7027bda56e2bac97661f01708cf9d4
|
# Dataset Card for "odmeeting_oracle"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
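A minimal loading sketch based on the declared schema (`Article`, `Summary`, `Query`; train/test/validation splits); the reading that `Article` holds a meeting transcript answered against `Query` is inferred from the dataset name, not stated in the card:

```python
from datasets import load_dataset

# Columns per the repo metadata: "Article", "Query", "Summary".
ds = load_dataset("kejian/odmeeting_oracle")
ex = ds["validation"][0]
print(ex["Query"])          # the query the summary should address
print(ex["Summary"][:300])  # start of the target summary
```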
|
kejian/odmeeting_oracle
|
[
"region:us"
] |
2023-10-10T05:48:34+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "Article", "dtype": "string"}, {"name": "Summary", "dtype": "string"}, {"name": "Query", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25745453, "num_examples": 261}, {"name": "test", "num_bytes": 13442766, "num_examples": 131}, {"name": "validation", "num_bytes": 4115166, "num_examples": 44}], "download_size": 21293422, "dataset_size": 43303385}}
|
2023-10-10T05:48:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "odmeeting_oracle"
More Information needed
|
[
"# Dataset Card for \"odmeeting_oracle\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"odmeeting_oracle\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"odmeeting_oracle\"\n\nMore Information needed"
] |
b6cf24ee1f88df0ca8bff0c8788ce31716f9eb21
|
# Dataset Card for "llama-2-nuv-intent-noE-pp-oos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Luciya/llama-2-nuv-intent-noE-pp-oos
|
[
"region:us"
] |
2023-10-10T05:50:05+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 921669, "num_examples": 1834}], "download_size": 134964, "dataset_size": 921669}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T05:50:06+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama-2-nuv-intent-noE-pp-oos"
More Information needed
|
[
"# Dataset Card for \"llama-2-nuv-intent-noE-pp-oos\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-2-nuv-intent-noE-pp-oos\"\n\nMore Information needed"
] |
[
6,
27
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-2-nuv-intent-noE-pp-oos\"\n\nMore Information needed"
] |
82c24e46fd479c9c99967fa60eaf6a0fd29690db
|
# Dataset Card for "llama-2-nuv-intent-noE-oos"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Luciya/llama-2-nuv-intent-noE-oos
|
[
"region:us"
] |
2023-10-10T05:50:17+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 828135, "num_examples": 1834}], "download_size": 127293, "dataset_size": 828135}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T05:50:18+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "llama-2-nuv-intent-noE-oos"
More Information needed
|
[
"# Dataset Card for \"llama-2-nuv-intent-noE-oos\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"llama-2-nuv-intent-noE-oos\"\n\nMore Information needed"
] |
[
6,
25
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"llama-2-nuv-intent-noE-oos\"\n\nMore Information needed"
] |
75ffc9c9086c7bbbd14d0068c35343d74c698ca5
|
# Dataset Card for "cpgqa_processed"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
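A sketch of how the tokenized fields appear to fit together; the tokenizer below is a hypothetical stand-in, since the card does not say which one produced `input_ids`:

```python
from datasets import load_dataset
from transformers import AutoTokenizer

ds = load_dataset("pbaoo2705/cpgqa_processed", split="train")
ex = ds[0]

# start_positions/end_positions appear to index into input_ids, SQuAD-style.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")  # hypothetical choice
span = ex["input_ids"][ex["start_positions"] : ex["end_positions"] + 1]
print(tok.decode(span))  # should roughly match ex["answer"]
```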
|
pbaoo2705/cpgqa_processed
|
[
"region:us"
] |
2023-10-10T05:53:18+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "answer", "dtype": "string"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 9148601, "num_examples": 884}], "download_size": 190231, "dataset_size": 9148601}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-16T05:02:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cpgqa_processed"
More Information needed
|
[
"# Dataset Card for \"cpgqa_processed\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cpgqa_processed\"\n\nMore Information needed"
] |
[
6,
16
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cpgqa_processed\"\n\nMore Information needed"
] |
722f38d6cc5826a1b39a7fe77ff3f5797b79206d
|
# Dataset Card for "cpgqa_processed_eval"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
pbaoo2705/cpgqa_processed_eval
|
[
"region:us"
] |
2023-10-10T05:53:20+00:00
|
{"dataset_info": {"features": [{"name": "title", "dtype": "string"}, {"name": "id", "dtype": "int64"}, {"name": "question", "dtype": "string"}, {"name": "answer_text", "dtype": "string"}, {"name": "answer_start", "dtype": "int64"}, {"name": "context", "dtype": "string"}, {"name": "input_ids", "sequence": "int32"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "answer", "dtype": "string"}, {"name": "start_positions", "dtype": "int64"}, {"name": "end_positions", "dtype": "int64"}], "splits": [{"name": "validation", "num_bytes": 1212109, "num_examples": 104}], "download_size": 35223, "dataset_size": 1212109}, "configs": [{"config_name": "default", "data_files": [{"split": "validation", "path": "data/validation-*"}]}]}
|
2023-10-16T05:02:20+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "cpgqa_processed_eval"
More Information needed
|
[
"# Dataset Card for \"cpgqa_processed_eval\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"cpgqa_processed_eval\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"cpgqa_processed_eval\"\n\nMore Information needed"
] |
a855cfa3dbba60024aa17511d59f0fb53fd443d8
|
# Dataset Card for "fingpt-finred"
This dataset consists of both a Relation Extraction part and a Classification part, and it is used in Multi-task Instruction Tuning.
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
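A quick way to see the two task types side by side; the idea that the `instruction` field is what distinguishes them within the shared `input`/`output`/`instruction` schema is an inference from the card's one-line description:

```python
from collections import Counter

from datasets import load_dataset

ds = load_dataset("FinGPT/fingpt-finred", split="train")
# Group a small sample by the opening of the instruction text to surface
# the distinct prompt templates (relation extraction vs. classification).
prefixes = Counter(ex["instruction"][:40] for ex in ds.select(range(200)))
print(prefixes.most_common(5))
```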
|
FinGPT/fingpt-finred
|
[
"region:us"
] |
2023-10-10T05:56:22+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}, {"name": "instruction", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 25113554, "num_examples": 27558}, {"name": "test", "num_bytes": 4477146, "num_examples": 5112}], "download_size": 2114835, "dataset_size": 29590700}}
|
2023-10-10T05:58:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "fingpt-finred"
This dataset consists of both a Relation Extraction part and a Classification part, and it is used in Multi-task Instruction Tuning.
More Information needed
|
[
"# Dataset Card for \"fingpt-finred\"\n\nThis dataset consist of both Relation Extraction part and Classification part, and it used in Multi-task Instruction Tuning \n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"fingpt-finred\"\n\nThis dataset consist of both Relation Extraction part and Classification part, and it used in Multi-task Instruction Tuning \n\nMore Information needed"
] |
[
6,
43
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"fingpt-finred\"\n\nThis dataset consist of both Relation Extraction part and Classification part, and it used in Multi-task Instruction Tuning \n\nMore Information needed"
] |
6ef24581655532329ab7baeca9800b4bd87e200c
|
# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B-Instruct
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/maywell/Synatra-V0.1-7B-Instruct
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [maywell/Synatra-V0.1-7B-Instruct](https://huggingface.co/maywell/Synatra-V0.1-7B-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_maywell__Synatra-V0.1-7B-Instruct_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-06T18:05:12.244898](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-V0.1-7B-Instruct_public/blob/main/results_2023-11-06T18-05-12.244898.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.32246224832214765,
"em_stderr": 0.004786806140711669,
"f1": 0.3963055788590608,
"f1_stderr": 0.004634063813539812,
"acc": 0.46089483255174657,
"acc_stderr": 0.011702308149823175
},
"harness|drop|3": {
"em": 0.32246224832214765,
"em_stderr": 0.004786806140711669,
"f1": 0.3963055788590608,
"f1_stderr": 0.004634063813539812
},
"harness|gsm8k|5": {
"acc": 0.19408642911296436,
"acc_stderr": 0.010893918308192417
},
"harness|winogrande|5": {
"acc": 0.7277032359905288,
"acc_stderr": 0.012510697991453932
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_maywell__Synatra-V0.1-7B-Instruct
|
[
"region:us"
] |
2023-10-10T05:57:27+00:00
|
{"pretty_name": "Evaluation run of maywell/Synatra-V0.1-7B-Instruct", "dataset_summary": "Dataset automatically created during the evaluation run of model [maywell/Synatra-V0.1-7B-Instruct](https://huggingface.co/maywell/Synatra-V0.1-7B-Instruct) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_maywell__Synatra-V0.1-7B-Instruct_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-06T18:05:12.244898](https://huggingface.co/datasets/open-llm-leaderboard/details_maywell__Synatra-V0.1-7B-Instruct_public/blob/main/results_2023-11-06T18-05-12.244898.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.32246224832214765,\n \"em_stderr\": 0.004786806140711669,\n \"f1\": 0.3963055788590608,\n \"f1_stderr\": 0.004634063813539812,\n \"acc\": 0.46089483255174657,\n \"acc_stderr\": 0.011702308149823175\n },\n \"harness|drop|3\": {\n \"em\": 0.32246224832214765,\n \"em_stderr\": 0.004786806140711669,\n \"f1\": 0.3963055788590608,\n \"f1_stderr\": 0.004634063813539812\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.19408642911296436,\n \"acc_stderr\": 0.010893918308192417\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7277032359905288,\n \"acc_stderr\": 0.012510697991453932\n }\n}\n```", "repo_url": "https://huggingface.co/maywell/Synatra-V0.1-7B-Instruct", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_06T18_05_12.244898", "path": ["**/details_harness|drop|3_2023-11-06T18-05-12.244898.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-06T18-05-12.244898.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_06T18_05_12.244898", "path": ["**/details_harness|gsm8k|5_2023-11-06T18-05-12.244898.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-06T18-05-12.244898.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_06T18_05_12.244898", "path": ["**/details_harness|winogrande|5_2023-11-06T18-05-12.244898.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-06T18-05-12.244898.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_06T18_05_12.244898", "path": ["results_2023-11-06T18-05-12.244898.parquet"]}, {"split": "latest", "path": ["results_2023-11-06T18-05-12.244898.parquet"]}]}]}
|
2023-12-01T14:38:43+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B-Instruct
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B-Instruct on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
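The snippet was stripped from this processed copy; a variant of the one in the full card above, pulling the aggregated "results" configuration at its latest timestamp instead of a single task (both the config name and the "latest" split appear in the repo metadata):

```python
from datasets import load_dataset

# Repo id as given in the full card above; "results"/"latest" come from
# the declared configs.
results = load_dataset(
    "open-llm-leaderboard/details_maywell__Synatra-V0.1-7B-Instruct_public",
    "results",
    split="latest",
)
```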
## Latest results
These are the latest results from run 2023-11-06T18:05:12.244898 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B-Instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T18:05:12.244898(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B-Instruct",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T18:05:12.244898(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of maywell/Synatra-V0.1-7B-Instruct## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model maywell/Synatra-V0.1-7B-Instruct on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-06T18:05:12.244898(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
2eb93ff40b951dc7e5b182a94656ecf108cb09eb
|
# Dataset Card for Evaluation run of JosephusCheung/Pwen-7B-Chat-20_30
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/JosephusCheung/Pwen-7B-Chat-20_30
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [JosephusCheung/Pwen-7B-Chat-20_30](https://huggingface.co/JosephusCheung/Pwen-7B-Chat-20_30) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Pwen-7B-Chat-20_30",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-26T02:42:36.258115](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Pwen-7B-Chat-20_30/blob/main/results_2023-10-26T02-42-36.258115.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2954068791946309,
"em_stderr": 0.004672175556184236,
"f1": 0.3814209312080561,
"f1_stderr": 0.004573085663083055,
"acc": 0.44525521893903264,
"acc_stderr": 0.012103729416391124
},
"harness|drop|3": {
"em": 0.2954068791946309,
"em_stderr": 0.004672175556184236,
"f1": 0.3814209312080561,
"f1_stderr": 0.004573085663083055
},
"harness|gsm8k|5": {
"acc": 0.20621683093252463,
"acc_stderr": 0.011144364089781436
},
"harness|winogrande|5": {
"acc": 0.6842936069455406,
"acc_stderr": 0.01306309474300081
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_JosephusCheung__Pwen-7B-Chat-20_30
|
[
"region:us"
] |
2023-10-10T06:01:37+00:00
|
{"pretty_name": "Evaluation run of JosephusCheung/Pwen-7B-Chat-20_30", "dataset_summary": "Dataset automatically created during the evaluation run of model [JosephusCheung/Pwen-7B-Chat-20_30](https://huggingface.co/JosephusCheung/Pwen-7B-Chat-20_30) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_JosephusCheung__Pwen-7B-Chat-20_30\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-26T02:42:36.258115](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Pwen-7B-Chat-20_30/blob/main/results_2023-10-26T02-42-36.258115.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2954068791946309,\n \"em_stderr\": 0.004672175556184236,\n \"f1\": 0.3814209312080561,\n \"f1_stderr\": 0.004573085663083055,\n \"acc\": 0.44525521893903264,\n \"acc_stderr\": 0.012103729416391124\n },\n \"harness|drop|3\": {\n \"em\": 0.2954068791946309,\n \"em_stderr\": 0.004672175556184236,\n \"f1\": 0.3814209312080561,\n \"f1_stderr\": 0.004573085663083055\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.20621683093252463,\n \"acc_stderr\": 0.011144364089781436\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6842936069455406,\n \"acc_stderr\": 0.01306309474300081\n }\n}\n```", "repo_url": "https://huggingface.co/JosephusCheung/Pwen-7B-Chat-20_30", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|arc:challenge|25_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T02_42_36.258115", "path": ["**/details_harness|drop|3_2023-10-26T02-42-36.258115.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T02-42-36.258115.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T02_42_36.258115", "path": ["**/details_harness|gsm8k|5_2023-10-26T02-42-36.258115.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-26T02-42-36.258115.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hellaswag|10_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-01-15.573690.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-01-15.573690.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T07-01-15.573690.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T07-01-15.573690.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T07-01-15.573690.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T02_42_36.258115", "path": ["**/details_harness|winogrande|5_2023-10-26T02-42-36.258115.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T02-42-36.258115.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T07_01_15.573690", "path": ["results_2023-10-10T07-01-15.573690.parquet"]}, {"split": "2023_10_26T02_42_36.258115", "path": ["results_2023-10-26T02-42-36.258115.parquet"]}, {"split": "latest", "path": ["results_2023-10-26T02-42-36.258115.parquet"]}]}]}
|
2023-10-26T01:42:49+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of JosephusCheung/Pwen-7B-Chat-20_30
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model JosephusCheung/Pwen-7B-Chat-20_30 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
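A minimal loading sketch (the original code block was stripped from this card during processing). The repository name below is an assumption, inferred from the `open-llm-leaderboard/details_<org>__<model>` naming pattern used by other evaluation-run cards in this dump; `harness_winogrande_5` is one of the configurations listed in this record's metadata:

```python
from datasets import load_dataset

# Repository name inferred from the model name -- an assumption based on the
# naming pattern of other Open LLM Leaderboard detail datasets.
data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Pwen-7B-Chat-20_30",
    "harness_winogrande_5",  # per-task configuration; see the metadata for the full list
    split="train")           # "train" always points to the latest results
```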
## Latest results
These are the latest results from run 2023-10-26T02:42:36.258115 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of JosephusCheung/Pwen-7B-Chat-20_30",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Pwen-7B-Chat-20_30 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T02:42:36.258115(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of JosephusCheung/Pwen-7B-Chat-20_30",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Pwen-7B-Chat-20_30 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T02:42:36.258115(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of JosephusCheung/Pwen-7B-Chat-20_30## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Pwen-7B-Chat-20_30 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-26T02:42:36.258115(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
d1651261e3d7127ec944bb986c8010ca79594423
|
# Dataset Card for "my-txt2img-dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zhongzero/my-txt2img-dataset
|
[
"region:us"
] |
2023-10-10T06:15:28+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 177336.0, "num_examples": 1}], "download_size": 178171, "dataset_size": 177336.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-24T06:43:53+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "my-txt2img-dataset"
More Information needed
|
[
"# Dataset Card for \"my-txt2img-dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"my-txt2img-dataset\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"my-txt2img-dataset\"\n\nMore Information needed"
] |
a022af7e9265de4748d67d36b5189c6a33c2493c
|
<s>[INST] Compose a professional email with the following points: Me chinese cook 10 years Good good cook People love my food Can you hire me? [/INST] Thanks for the information. Unfortunately, your initial message does not contain enough details or structure to compose a compelling professional email to a potential employer. Here are some suggestions for how we can improve the message: 1) Let's first clarify the type of cooking position you are interested in. Are you looking for a job as a line cook, sous chef, executive chef, or something else? This will help determine how to emphasize relevant experience and skills. 2) What specific experience do you have? For example, "Over 10 years of experience preparing authentic Sichuan and Cantonese cuisine." Highlight any relevant skills, like knife skills, cooking techniques, menu development, etc. Mention the types of cuisine you specialize in. 3) Have you worked in a professional kitchen before? If so, mention the name and type of restaurant (upscale, casual dining, etc.). Emphasize relevant responsibilities and accomplishments. If not, that's fine too, just be clear that you're looking to transition into a professional kitchen and highlight related skills and passion for cooking. 4) Do you have any certifications or training? Note any culinary degrees, certificates, or other credentials. Even informal training can show your dedication to the craft. 5) Are you looking for full-time, part-time, temporary, or contract work? Be specific about the type of opportunity you're seeking. With the answers to these questions, we can craft an email in a standard business letter format. The opening can express your interest in the position and summarize your relevant experience and qualifications. The body can go into more depth on your experience, skills, and goals. The closing can reiterate your interest and mention that you'd welcome the opportunity to further discuss the role. Does this help? Let me know if you would like help crafting the full email. I'm happy to assist you in composing a professional and compelling message to potential employers. </s>
|
jpiorko/marketingagency
|
[
"region:us"
] |
2023-10-10T06:21:51+00:00
|
{}
|
2023-10-11T06:10:44+00:00
|
[] |
[] |
TAGS
#region-us
|
<s>[INST] Compose a professional email with the following points: Me chinese cook 10 years Good good cook People love my food Can you hire me? [/INST] Thanks for the information. Unfortunately, your initial message does not contain enough details or structure to compose a compelling professional email to a potential employer. Here are some suggestions for how we can improve the message: 1) Let's first clarify the type of cooking position you are interested in. Are you looking for a job as a line cook, sous chef, executive chef, or something else? This will help determine how to emphasize relevant experience and skills. 2) What specific experience do you have? For example, "Over 10 years of experience preparing authentic Sichuan and Cantonese cuisine." Highlight any relevant skills, like knife skills, cooking techniques, menu development, etc. Mention the types of cuisine you specialize in. 3) Have you worked in a professional kitchen before? If so, mention the name and type of restaurant (upscale, casual dining, etc.). Emphasize relevant responsibilities and accomplishments. If not, that's fine too, just be clear that you're looking to transition into a professional kitchen and highlight related skills and passion for cooking. 4) Do you have any certifications or training? Note any culinary degrees, certificates, or other credentials. Even informal training can show your dedication to the craft. 5) Are you looking for full-time, part-time, temporary, or contract work? Be specific about the type of opportunity you're seeking. With the answers to these questions, we can craft an email in a standard business letter format. The opening can express your interest in the position and summarize your relevant experience and qualifications. The body can go into more depth on your experience, skills, and goals. The closing can reiterate your interest and mention that you'd welcome the opportunity to further discuss the role. Does this help? Let me know if you would like help crafting the full email. I'm happy to assist you in composing a professional and compelling message to potential employers. </s>
|
[] |
[
"TAGS\n#region-us \n"
] |
[
6
] |
[
"passage: TAGS\n#region-us \n"
] |
babe940f5b40a6f8ec6f072a0db9d9af429b0382
|
# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Base-LLaMAfied
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/hiyouga/Baichuan2-7B-Base-LLaMAfied
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [hiyouga/Baichuan2-7B-Base-LLaMAfied](https://huggingface.co/hiyouga/Baichuan2-7B-Base-LLaMAfied) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
# Pick one of the per-task configurations listed in this card's metadata.
data = load_dataset("open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied",
	"harness_winogrande_5",  # 5-shot Winogrande details
	split="train")           # "train" always points to the latest results
```
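Any of the per-task configurations listed in this dataset's metadata (for example `harness_gsm8k_5` or `harness_drop_3`) can be loaded the same way by passing its name as the second argument.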
## Latest results
These are the [latest results from run 2023-10-26T06:44:24.493952](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied/blob/main/results_2023-10-26T06-44-24.493952.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.001572986577181208,
"em_stderr": 0.0004058451132417743,
"f1": 0.0585476090604028,
"f1_stderr": 0.0013740361163735455,
"acc": 0.3926358910777041,
"acc_stderr": 0.010089987799825416
},
"harness|drop|3": {
"em": 0.001572986577181208,
"em_stderr": 0.0004058451132417743,
"f1": 0.0585476090604028,
"f1_stderr": 0.0013740361163735455
},
"harness|gsm8k|5": {
"acc": 0.07808946171341925,
"acc_stderr": 0.007390654481108214
},
"harness|winogrande|5": {
"acc": 0.7071823204419889,
"acc_stderr": 0.01278932111854262
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied
|
[
"region:us"
] |
2023-10-10T06:26:05+00:00
|
{"pretty_name": "Evaluation run of hiyouga/Baichuan2-7B-Base-LLaMAfied", "dataset_summary": "Dataset automatically created during the evaluation run of model [hiyouga/Baichuan2-7B-Base-LLaMAfied](https://huggingface.co/hiyouga/Baichuan2-7B-Base-LLaMAfied) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-26T06:44:24.493952](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied/blob/main/results_2023-10-26T06-44-24.493952.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.0004058451132417743,\n \"f1\": 0.0585476090604028,\n \"f1_stderr\": 0.0013740361163735455,\n \"acc\": 0.3926358910777041,\n \"acc_stderr\": 0.010089987799825416\n },\n \"harness|drop|3\": {\n \"em\": 0.001572986577181208,\n \"em_stderr\": 0.0004058451132417743,\n \"f1\": 0.0585476090604028,\n \"f1_stderr\": 0.0013740361163735455\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07808946171341925,\n \"acc_stderr\": 0.007390654481108214\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7071823204419889,\n \"acc_stderr\": 0.01278932111854262\n }\n}\n```", "repo_url": "https://huggingface.co/hiyouga/Baichuan2-7B-Base-LLaMAfied", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|arc:challenge|25_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T06_44_24.493952", "path": ["**/details_harness|drop|3_2023-10-26T06-44-24.493952.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T06-44-24.493952.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T06_44_24.493952", "path": ["**/details_harness|gsm8k|5_2023-10-26T06-44-24.493952.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-26T06-44-24.493952.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hellaswag|10_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T07-25-43.126145.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-25-43.126145.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-25-43.126145.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T07-25-43.126145.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T07-25-43.126145.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T07-25-43.126145.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T06_44_24.493952", "path": ["**/details_harness|winogrande|5_2023-10-26T06-44-24.493952.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T06-44-24.493952.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T07_25_43.126145", "path": ["results_2023-10-10T07-25-43.126145.parquet"]}, {"split": "2023_10_26T06_44_24.493952", "path": ["results_2023-10-26T06-44-24.493952.parquet"]}, {"split": "latest", "path": ["results_2023-10-26T06-44-24.493952.parquet"]}]}]}
|
2023-10-26T05:44:37+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Base-LLaMAfied
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Base-LLaMAfied on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
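A minimal loading sketch (the configuration name `harness_winogrande_5` appears in this repo's metadata; the dataset id is an assumption, following the `open-llm-leaderboard/details_<org>__<model>` naming pattern used by the sibling card below):

```python
from datasets import load_dataset

# Assumed repo id, following the open-llm-leaderboard/details_* naming pattern.
# The "train" split always points to the results of the latest evaluation run.
data = load_dataset("open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Base-LLaMAfied",
	"harness_winogrande_5",
	split="train")
```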
## Latest results
These are the latest results from run 2023-10-26T06:44:24.493952 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Base-LLaMAfied",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Base-LLaMAfied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T06:44:24.493952(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Base-LLaMAfied",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Base-LLaMAfied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T06:44:24.493952(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
175,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Base-LLaMAfied## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Base-LLaMAfied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-26T06:44:24.493952(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b59665c36e1589615e82fc5411ddfde9a6b8997a
|
# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Chat-LLaMAfied
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/hiyouga/Baichuan2-7B-Chat-LLaMAfied
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [hiyouga/Baichuan2-7B-Chat-LLaMAfied](https://huggingface.co/hiyouga/Baichuan2-7B-Chat-LLaMAfied) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Chat-LLaMAfied",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-27T01:55:17.464897](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Chat-LLaMAfied/blob/main/results_2023-10-27T01-55-17.464897.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2062709731543624,
"em_stderr": 0.004143762363131985,
"f1": 0.26938129194630883,
"f1_stderr": 0.004172682699820514,
"acc": 0.40028530858265426,
"acc_stderr": 0.010786124750718863
},
"harness|drop|3": {
"em": 0.2062709731543624,
"em_stderr": 0.004143762363131985,
"f1": 0.26938129194630883,
"f1_stderr": 0.004172682699820514
},
"harness|gsm8k|5": {
"acc": 0.10917361637604246,
"acc_stderr": 0.008590089300511151
},
"harness|winogrande|5": {
"acc": 0.691397000789266,
"acc_stderr": 0.012982160200926577
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Chat-LLaMAfied
|
[
"region:us"
] |
2023-10-10T06:31:24+00:00
|
{"pretty_name": "Evaluation run of hiyouga/Baichuan2-7B-Chat-LLaMAfied", "dataset_summary": "Dataset automatically created during the evaluation run of model [hiyouga/Baichuan2-7B-Chat-LLaMAfied](https://huggingface.co/hiyouga/Baichuan2-7B-Chat-LLaMAfied) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Chat-LLaMAfied\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-27T01:55:17.464897](https://huggingface.co/datasets/open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Chat-LLaMAfied/blob/main/results_2023-10-27T01-55-17.464897.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2062709731543624,\n \"em_stderr\": 0.004143762363131985,\n \"f1\": 0.26938129194630883,\n \"f1_stderr\": 0.004172682699820514,\n \"acc\": 0.40028530858265426,\n \"acc_stderr\": 0.010786124750718863\n },\n \"harness|drop|3\": {\n \"em\": 0.2062709731543624,\n \"em_stderr\": 0.004143762363131985,\n \"f1\": 0.26938129194630883,\n \"f1_stderr\": 0.004172682699820514\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10917361637604246,\n \"acc_stderr\": 0.008590089300511151\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.691397000789266,\n \"acc_stderr\": 0.012982160200926577\n }\n}\n```", "repo_url": "https://huggingface.co/hiyouga/Baichuan2-7B-Chat-LLaMAfied", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|arc:challenge|25_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_27T01_55_17.464897", "path": ["**/details_harness|drop|3_2023-10-27T01-55-17.464897.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-27T01-55-17.464897.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_27T01_55_17.464897", "path": ["**/details_harness|gsm8k|5_2023-10-27T01-55-17.464897.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-27T01-55-17.464897.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hellaswag|10_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T07-31-02.024016.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-31-02.024016.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-31-02.024016.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T07-31-02.024016.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T07-31-02.024016.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T07-31-02.024016.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_27T01_55_17.464897", "path": ["**/details_harness|winogrande|5_2023-10-27T01-55-17.464897.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-27T01-55-17.464897.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T07_31_02.024016", "path": ["results_2023-10-10T07-31-02.024016.parquet"]}, {"split": "2023_10_27T01_55_17.464897", "path": ["results_2023-10-27T01-55-17.464897.parquet"]}, {"split": "latest", "path": ["results_2023-10-27T01-55-17.464897.parquet"]}]}]}
|
2023-10-27T00:55:32+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Chat-LLaMAfied
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Chat-LLaMAfied on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
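For instance, a minimal sketch; the repository id below is an assumption based on the leaderboard's usual `details_<org>__<model>` naming, and `harness_winogrande_5` is one of the configurations listed in this entry's metadata:

```python
from datasets import load_dataset

# Repository id assumed from the leaderboard's "details_<org>__<model>" naming
# convention; it is not stated verbatim in this rendered card.
data = load_dataset(
    "open-llm-leaderboard/details_hiyouga__Baichuan2-7B-Chat-LLaMAfied",
    "harness_winogrande_5",  # one evaluated-task configuration from the metadata
    split="train",           # "train" always points to the latest results
)
```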
## Latest results
These are the latest results from run 2023-10-27T01:55:17.464897 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Chat-LLaMAfied",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Chat-LLaMAfied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T01:55:17.464897(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Chat-LLaMAfied",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Chat-LLaMAfied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T01:55:17.464897(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of hiyouga/Baichuan2-7B-Chat-LLaMAfied## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model hiyouga/Baichuan2-7B-Chat-LLaMAfied on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-27T01:55:17.464897(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
9c706f61651407e1569f68803b2bbf6c4b28c0da
|
# Dataset Card for "Data_no_pad"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
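This entry's metadata describes graph-style features (`x`, `edge_index`, `edge_attr`, `y`); below is a minimal sketch of loading and inspecting one example (the graph interpretation of these fields is an assumption, since the card itself gives no documentation):

```python
from datasets import load_dataset

# Feature names come from this entry's metadata; reading them as graph data
# (node features, edge list, edge attributes, target) is an assumption.
ds = load_dataset("zardat/Data_no_pad", split="train")  # 1018 examples per the metadata
row = ds[0]
print(len(row["x"]), len(row["edge_index"]), len(row["edge_attr"]), row["y"])
```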
|
zardat/Data_no_pad
|
[
"region:us"
] |
2023-10-10T06:35:56+00:00
|
{"dataset_info": {"features": [{"name": "x", "sequence": {"sequence": "float32"}}, {"name": "edge_index", "sequence": {"sequence": "float32"}}, {"name": "edge_attr", "sequence": "float32"}, {"name": "y", "dtype": "float32"}], "splits": [{"name": "train", "num_bytes": 9184112, "num_examples": 1018}], "download_size": 2796188, "dataset_size": 9184112}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T06:36:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "Data_no_pad"
More Information needed
|
[
"# Dataset Card for \"Data_no_pad\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"Data_no_pad\"\n\nMore Information needed"
] |
[
6,
15
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"Data_no_pad\"\n\nMore Information needed"
] |
1a47da1f40f2f15ec0a0659a090c51a2fab1de3c
|
# Dataset Card for Evaluation run of JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat](https://huggingface.co/JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Qwen-VL-LLaMAfied-7B-Chat_public",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-11-06T17:59:10.856732](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Qwen-VL-LLaMAfied-7B-Chat_public/blob/main/results_2023-11-06T17-59-10.856732.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2633179530201342,
"em_stderr": 0.004510450588757744,
"f1": 0.33739093959731714,
"f1_stderr": 0.00449081714733481,
"acc": 0.3283346487766377,
"acc_stderr": 0.006672411592679005
},
"harness|drop|3": {
"em": 0.2633179530201342,
"em_stderr": 0.004510450588757744,
"f1": 0.33739093959731714,
"f1_stderr": 0.00449081714733481
},
"harness|gsm8k|5": {
"acc": 0.0,
"acc_stderr": 0.0
},
"harness|winogrande|5": {
"acc": 0.6566692975532754,
"acc_stderr": 0.01334482318535801
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_JosephusCheung__Qwen-VL-LLaMAfied-7B-Chat
|
[
"region:us"
] |
2023-10-10T06:40:08+00:00
|
{"pretty_name": "Evaluation run of JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat", "dataset_summary": "Dataset automatically created during the evaluation run of model [JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat](https://huggingface.co/JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_JosephusCheung__Qwen-VL-LLaMAfied-7B-Chat_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-06T17:59:10.856732](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Qwen-VL-LLaMAfied-7B-Chat_public/blob/main/results_2023-11-06T17-59-10.856732.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2633179530201342,\n \"em_stderr\": 0.004510450588757744,\n \"f1\": 0.33739093959731714,\n \"f1_stderr\": 0.00449081714733481,\n \"acc\": 0.3283346487766377,\n \"acc_stderr\": 0.006672411592679005\n },\n \"harness|drop|3\": {\n \"em\": 0.2633179530201342,\n \"em_stderr\": 0.004510450588757744,\n \"f1\": 0.33739093959731714,\n \"f1_stderr\": 0.00449081714733481\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.0,\n \"acc_stderr\": 0.0\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6566692975532754,\n \"acc_stderr\": 0.01334482318535801\n }\n}\n```", "repo_url": "https://huggingface.co/JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_04T23_03_04.341481", "path": ["**/details_harness|drop|3_2023-11-04T23-03-04.341481.parquet"]}, {"split": "2023_11_06T17_59_10.856732", "path": ["**/details_harness|drop|3_2023-11-06T17-59-10.856732.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-06T17-59-10.856732.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_04T23_03_04.341481", "path": ["**/details_harness|gsm8k|5_2023-11-04T23-03-04.341481.parquet"]}, {"split": "2023_11_06T17_59_10.856732", "path": ["**/details_harness|gsm8k|5_2023-11-06T17-59-10.856732.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-06T17-59-10.856732.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_04T23_03_04.341481", "path": ["**/details_harness|winogrande|5_2023-11-04T23-03-04.341481.parquet"]}, {"split": "2023_11_06T17_59_10.856732", "path": ["**/details_harness|winogrande|5_2023-11-06T17-59-10.856732.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-11-06T17-59-10.856732.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_04T23_03_04.341481", "path": ["results_2023-11-04T23-03-04.341481.parquet"]}, {"split": "2023_11_06T17_59_10.856732", "path": ["results_2023-11-06T17-59-10.856732.parquet"]}, {"split": "latest", "path": ["results_2023-11-06T17-59-10.856732.parquet"]}]}]}
|
2023-12-01T14:08:15+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
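For instance, a minimal sketch mirroring the snippet shown in the full card earlier in this entry:

```python
from datasets import load_dataset

data = load_dataset(
    "open-llm-leaderboard/details_JosephusCheung__Qwen-VL-LLaMAfied-7B-Chat_public",
    "harness_winogrande_5",  # one evaluated-task configuration
    split="train",           # "train" always points to the latest results
)
```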
## Latest results
These are the latest results from run 2023-11-06T17:59:10.856732 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T17:59:10.856732(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T17:59:10.856732(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
28,
31,
177,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Qwen-VL-LLaMAfied-7B-Chat on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-06T17:59:10.856732(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
e534bb8de535723243c4b16dbb41bd1f7654113e
|
* A practice repository for trying out a loading script that uses the Huggingface API.
* The data content is almost the same as <a href="https://huggingface.co/datasets/mickylan2367/GraySpectrogram2">mickylan2367/GraySpectrogram</a>.
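
A minimal loading sketch (split names are not documented here, and the `trust_remote_code` flag is an assumption for a script-based repository):

```python
from datasets import load_dataset

# Script-based repositories may need trust_remote_code=True (an assumption;
# this card does not document it).
ds = load_dataset("mickylan2367/LoadingScriptPractice", trust_remote_code=True)
print(ds)
```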
|
mickylan2367/LoadingScriptPractice
|
[
"language:en",
"license:cc-by-sa-4.0",
"music",
"region:us"
] |
2023-10-10T06:44:32+00:00
|
{"language": ["en"], "license": "cc-by-sa-4.0", "tags": ["music"]}
|
2023-10-11T01:23:34+00:00
|
[] |
[
"en"
] |
TAGS
#language-English #license-cc-by-sa-4.0 #music #region-us
|
* A practice repository for trying out a loading script that uses the Huggingface API.
* The data content is almost the same as <a href="URL>.
|
[] |
[
"TAGS\n#language-English #license-cc-by-sa-4.0 #music #region-us \n"
] |
[
23
] |
[
"passage: TAGS\n#language-English #license-cc-by-sa-4.0 #music #region-us \n"
] |
f7c3abd58b2a2e591332ee5b8cb1449acbd1ff68
|
# Dataset Card for "finetuning_llama2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ravivishwakarmauzio/finetuning_llama2
|
[
"region:us"
] |
2023-10-10T06:52:35+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 338808, "num_examples": 200}], "download_size": 0, "dataset_size": 338808}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T08:09:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "finetuning_llama2"
More Information needed
|
[
"# Dataset Card for \"finetuning_llama2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"finetuning_llama2\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"finetuning_llama2\"\n\nMore Information needed"
] |
7e5d59edc114fd0dd3edb25d85390862a1372f58
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-timedial](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T02:58:22.436019](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial/blob/main/results_2023-10-25T02-58-22.436019.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.004299496644295302,
"em_stderr": 0.0006700586558629934,
"f1": 0.06840499161073847,
"f1_stderr": 0.001566173833045158,
"acc": 0.40418915336712596,
"acc_stderr": 0.009775164829075637
},
"harness|drop|3": {
"em": 0.004299496644295302,
"em_stderr": 0.0006700586558629934,
"f1": 0.06840499161073847,
"f1_stderr": 0.001566173833045158
},
"harness|gsm8k|5": {
"acc": 0.07278241091736164,
"acc_stderr": 0.007155604761167476
},
"harness|winogrande|5": {
"acc": 0.7355958958168903,
"acc_stderr": 0.012394724896983797
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial
|
[
"region:us"
] |
2023-10-10T07:03:52+00:00
|
{"pretty_name": "Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial", "dataset_summary": "Dataset automatically created during the evaluation run of model [Charlie911/vicuna-7b-v1.5-lora-timedial](https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T02:58:22.436019](https://huggingface.co/datasets/open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial/blob/main/results_2023-10-25T02-58-22.436019.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.004299496644295302,\n \"em_stderr\": 0.0006700586558629934,\n \"f1\": 0.06840499161073847,\n \"f1_stderr\": 0.001566173833045158,\n \"acc\": 0.40418915336712596,\n \"acc_stderr\": 0.009775164829075637\n },\n \"harness|drop|3\": {\n \"em\": 0.004299496644295302,\n \"em_stderr\": 0.0006700586558629934,\n \"f1\": 0.06840499161073847,\n \"f1_stderr\": 0.001566173833045158\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.07278241091736164,\n \"acc_stderr\": 0.007155604761167476\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7355958958168903,\n \"acc_stderr\": 0.012394724896983797\n }\n}\n```", "repo_url": "https://huggingface.co/Charlie911/vicuna-7b-v1.5-lora-timedial", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|arc:challenge|25_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T02_58_22.436019", "path": ["**/details_harness|drop|3_2023-10-25T02-58-22.436019.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T02-58-22.436019.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T02_58_22.436019", "path": ["**/details_harness|gsm8k|5_2023-10-25T02-58-22.436019.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T02-58-22.436019.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hellaswag|10_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-03-27.841263.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-03-27.841263.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-03-27.841263.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T08-03-27.841263.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T08-03-27.841263.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T08-03-27.841263.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T02_58_22.436019", "path": ["**/details_harness|winogrande|5_2023-10-25T02-58-22.436019.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T02-58-22.436019.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T08_03_27.841263", "path": ["results_2023-10-10T08-03-27.841263.parquet"]}, {"split": "2023_10_25T02_58_22.436019", "path": ["results_2023-10-25T02-58-22.436019.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T02-58-22.436019.parquet"]}]}]}
|
2023-10-25T01:58:34+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
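
The code block was stripped from this copy of the card; below is a minimal sketch of the load call. The repository id is an assumption inferred from the leaderboard's usual `details_<org>__<model>` naming, and `harness_winogrande_5` is one of the configs listed in this record's metadata:

```python
from datasets import load_dataset

# Hypothetical repo id inferred from the leaderboard naming convention;
# adjust if the actual details repository is named differently.
data = load_dataset(
    "open-llm-leaderboard/details_Charlie911__vicuna-7b-v1.5-lora-timedial",
    "harness_winogrande_5",
    split="train",
)
```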
## Latest results
These are the latest results from run 2023-10-25T02:58:22.436019 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T02:58:22.436019(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T02:58:22.436019(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
26,
31,
174,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Charlie911/vicuna-7b-v1.5-lora-timedial## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model Charlie911/vicuna-7b-v1.5-lora-timedial on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T02:58:22.436019(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
68c5e0884fa7c0c3ccdd689d4d57dc3b8a1a39d0
|
# Dataset Card for Evaluation run of JosephusCheung/Pwen-VL-Chat-20_30
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/JosephusCheung/Pwen-VL-Chat-20_30
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [JosephusCheung/Pwen-VL-Chat-20_30](https://huggingface.co/JosephusCheung/Pwen-VL-Chat-20_30) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_JosephusCheung__Pwen-VL-Chat-20_30_public",
"harness_winogrande_5",
split="train")
```
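
The aggregated "results" configuration mentioned above can be loaded the same way; here is a minimal sketch, using the config and split names listed in this card's metadata:

```python
from datasets import load_dataset

# "latest" always resolves to the most recent evaluation run.
results = load_dataset(
    "open-llm-leaderboard/details_JosephusCheung__Pwen-VL-Chat-20_30_public",
    "results",
    split="latest",
)
print(results[0])  # one row of aggregated metrics for the latest run
```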
## Latest results
These are the [latest results from run 2023-11-06T13:45:28.201357](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Pwen-VL-Chat-20_30_public/blob/main/results_2023-11-06T13-45-28.201357.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3234060402684564,
"em_stderr": 0.004790466119380845,
"f1": 0.3795564177852361,
"f1_stderr": 0.004705234681743664,
"acc": 0.4372790848104894,
"acc_stderr": 0.011950329759903327
},
"harness|drop|3": {
"em": 0.3234060402684564,
"em_stderr": 0.004790466119380845,
"f1": 0.3795564177852361,
"f1_stderr": 0.004705234681743664
},
"harness|gsm8k|5": {
"acc": 0.1910538286580743,
"acc_stderr": 0.010828791191755175
},
"harness|winogrande|5": {
"acc": 0.6835043409629045,
"acc_stderr": 0.013071868328051477
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_JosephusCheung__Pwen-VL-Chat-20_30
|
[
"region:us"
] |
2023-10-10T07:17:43+00:00
|
{"pretty_name": "Evaluation run of JosephusCheung/Pwen-VL-Chat-20_30", "dataset_summary": "Dataset automatically created during the evaluation run of model [JosephusCheung/Pwen-VL-Chat-20_30](https://huggingface.co/JosephusCheung/Pwen-VL-Chat-20_30) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_JosephusCheung__Pwen-VL-Chat-20_30_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-06T13:45:28.201357](https://huggingface.co/datasets/open-llm-leaderboard/details_JosephusCheung__Pwen-VL-Chat-20_30_public/blob/main/results_2023-11-06T13-45-28.201357.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3234060402684564,\n \"em_stderr\": 0.004790466119380845,\n \"f1\": 0.3795564177852361,\n \"f1_stderr\": 0.004705234681743664,\n \"acc\": 0.4372790848104894,\n \"acc_stderr\": 0.011950329759903327\n },\n \"harness|drop|3\": {\n \"em\": 0.3234060402684564,\n \"em_stderr\": 0.004790466119380845,\n \"f1\": 0.3795564177852361,\n \"f1_stderr\": 0.004705234681743664\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1910538286580743,\n \"acc_stderr\": 0.010828791191755175\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.6835043409629045,\n \"acc_stderr\": 0.013071868328051477\n }\n}\n```", "repo_url": "https://huggingface.co/JosephusCheung/Pwen-VL-Chat-20_30", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_04T15_47_38.506264", "path": ["**/details_harness|drop|3_2023-11-04T15-47-38.506264.parquet"]}, {"split": "2023_11_06T13_45_28.201357", "path": ["**/details_harness|drop|3_2023-11-06T13-45-28.201357.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-06T13-45-28.201357.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_04T15_47_38.506264", "path": ["**/details_harness|gsm8k|5_2023-11-04T15-47-38.506264.parquet"]}, {"split": "2023_11_06T13_45_28.201357", "path": ["**/details_harness|gsm8k|5_2023-11-06T13-45-28.201357.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-06T13-45-28.201357.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_04T15_47_38.506264", "path": ["**/details_harness|winogrande|5_2023-11-04T15-47-38.506264.parquet"]}, {"split": "2023_11_06T13_45_28.201357", "path": ["**/details_harness|winogrande|5_2023-11-06T13-45-28.201357.parquet"]}, {"split": "latest", "path": 
["**/details_harness|winogrande|5_2023-11-06T13-45-28.201357.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_04T15_47_38.506264", "path": ["results_2023-11-04T15-47-38.506264.parquet"]}, {"split": "2023_11_06T13_45_28.201357", "path": ["results_2023-11-06T13-45-28.201357.parquet"]}, {"split": "latest", "path": ["results_2023-11-06T13-45-28.201357.parquet"]}]}]}
|
2023-12-01T14:01:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of JosephusCheung/Pwen-VL-Chat-20_30
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model JosephusCheung/Pwen-VL-Chat-20_30 on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
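
The code block was stripped from this copy of the card; below is a minimal sketch of the load call, reusing the repository id given in the full version of this card earlier in the document:

```python
from datasets import load_dataset

# Repo id taken from the full card above; harness_winogrande_5 is one of
# the configs listed in this card's metadata.
data = load_dataset(
    "open-llm-leaderboard/details_JosephusCheung__Pwen-VL-Chat-20_30_public",
    "harness_winogrande_5",
    split="train",
)
```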
## Latest results
These are the latest results from run 2023-11-06T13:45:28.201357 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of JosephusCheung/Pwen-VL-Chat-20_30",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Pwen-VL-Chat-20_30 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T13:45:28.201357(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of JosephusCheung/Pwen-VL-Chat-20_30",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Pwen-VL-Chat-20_30 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T13:45:28.201357(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
172,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of JosephusCheung/Pwen-VL-Chat-20_30## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model JosephusCheung/Pwen-VL-Chat-20_30 on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-06T13:45:28.201357(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
a8a87d50f917ab13cada6a43eafed3c2ef950638
|
# Dataset Card for "synpre_mix_v1_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/synpre_mix_v1_1M
|
[
"region:us"
] |
2023-10-10T07:25:44+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1480500977.0, "num_examples": 1000000}, {"name": "validation", "num_bytes": 14857855.5, "num_examples": 10000}], "download_size": 976476075, "dataset_size": 1495358832.5}}
|
2023-10-10T07:28:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "synpre_mix_v1_1M"
More Information needed
|
[
"# Dataset Card for \"synpre_mix_v1_1M\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"synpre_mix_v1_1M\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"synpre_mix_v1_1M\"\n\nMore Information needed"
] |
d0fafc014bd1b5ee7b60f3b5d25f05e311a16dd4
|
# Dataset Card for "synpre_mix_v2_1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tyzhu/synpre_mix_v2_1M
|
[
"region:us"
] |
2023-10-10T07:30:03+00:00
|
{"dataset_info": {"features": [{"name": "inputs", "dtype": "string"}, {"name": "targets", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1526179456.3333333, "num_examples": 1000000}, {"name": "validation", "num_bytes": 15281524.666666666, "num_examples": 10000}], "download_size": 1092482565, "dataset_size": 1541460981.0}}
|
2023-10-10T07:33:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "synpre_mix_v2_1M"
More Information needed
|
[
"# Dataset Card for \"synpre_mix_v2_1M\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"synpre_mix_v2_1M\"\n\nMore Information needed"
] |
[
6,
19
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"synpre_mix_v2_1M\"\n\nMore Information needed"
] |
dff7a5c3af9f5d1602adc2e2be92dd55af09267f
|
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mistral-7b-v13](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13",
"harness_winogrande_5",
split="train")
```
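
Since each timestamped run is stored as its own split, the "latest" split can also be requested directly; here is a minimal sketch using one of the configs listed in this card's metadata:

```python
from datasets import load_dataset

# "latest" always points at the most recent run for the chosen config.
latest = load_dataset(
    "open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13",
    "harness_gsm8k_5",
    split="latest",
)
```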
## Latest results
These are the [latest results from run 2023-10-24T22:04:44.332803](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13/blob/main/results_2023-10-24T22-04-44.332803.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.29750419463087246,
"em_stderr": 0.004681748345750226,
"f1": 0.3555442533557056,
"f1_stderr": 0.004616201496073195,
"acc": 0.4322619501392136,
"acc_stderr": 0.011205063255665634
},
"harness|drop|3": {
"em": 0.29750419463087246,
"em_stderr": 0.004681748345750226,
"f1": 0.3555442533557056,
"f1_stderr": 0.004616201496073195
},
"harness|gsm8k|5": {
"acc": 0.1470811220621683,
"acc_stderr": 0.00975606366035987
},
"harness|winogrande|5": {
"acc": 0.7174427782162589,
"acc_stderr": 0.012654062850971398
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13
|
[
"region:us"
] |
2023-10-10T07:32:31+00:00
|
{"pretty_name": "Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13", "dataset_summary": "Dataset automatically created during the evaluation run of model [OpenBuddy/openbuddy-mistral-7b-v13](https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-24T22:04:44.332803](https://huggingface.co/datasets/open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13/blob/main/results_2023-10-24T22-04-44.332803.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.29750419463087246,\n \"em_stderr\": 0.004681748345750226,\n \"f1\": 0.3555442533557056,\n \"f1_stderr\": 0.004616201496073195,\n \"acc\": 0.4322619501392136,\n \"acc_stderr\": 0.011205063255665634\n },\n \"harness|drop|3\": {\n \"em\": 0.29750419463087246,\n \"em_stderr\": 0.004681748345750226,\n \"f1\": 0.3555442533557056,\n \"f1_stderr\": 0.004616201496073195\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1470811220621683,\n \"acc_stderr\": 0.00975606366035987\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7174427782162589,\n \"acc_stderr\": 0.012654062850971398\n }\n}\n```", "repo_url": "https://huggingface.co/OpenBuddy/openbuddy-mistral-7b-v13", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|arc:challenge|25_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_24T22_04_44.332803", "path": ["**/details_harness|drop|3_2023-10-24T22-04-44.332803.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-24T22-04-44.332803.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_24T22_04_44.332803", "path": ["**/details_harness|gsm8k|5_2023-10-24T22-04-44.332803.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-24T22-04-44.332803.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hellaswag|10_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-32-08.394718.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-32-08.394718.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T08-32-08.394718.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T08-32-08.394718.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T08-32-08.394718.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_24T22_04_44.332803", "path": ["**/details_harness|winogrande|5_2023-10-24T22-04-44.332803.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-24T22-04-44.332803.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T08_32_08.394718", "path": ["results_2023-10-10T08-32-08.394718.parquet"]}, {"split": "2023_10_24T22_04_44.332803", "path": ["results_2023-10-24T22-04-44.332803.parquet"]}, {"split": "latest", "path": ["results_2023-10-24T22-04-44.332803.parquet"]}]}]}
|
2023-10-24T21:04:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
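A minimal sketch (the details repository id, details_OpenBuddy__openbuddy-mistral-7b-v13, is inferred from the model name following the open-llm-leaderboard naming convention, and harness_winogrande_5 is one of the configs listed in this card's metadata):

```python
from datasets import load_dataset

# Repository id inferred from the model name; adjust if the actual id differs.
data = load_dataset("open-llm-leaderboard/details_OpenBuddy__openbuddy-mistral-7b-v13",
    "harness_winogrande_5",
    split="train")
```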
## Latest results
These are the latest results from run 2023-10-24T22:04:44.332803 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T22:04:44.332803(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-24T22:04:44.332803(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of OpenBuddy/openbuddy-mistral-7b-v13## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model OpenBuddy/openbuddy-mistral-7b-v13 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-24T22:04:44.332803(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
32d28a621d76c922299d8d032636859c31fe6e6e
|
# Dataset Card for "eCommerceQuery"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
danieletdg/eCommerceQuery
|
[
"region:us"
] |
2023-10-10T07:38:19+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "entities", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 398956, "num_examples": 3994}, {"name": "test", "num_bytes": 1597728, "num_examples": 15980}], "download_size": 1007526, "dataset_size": 1996684}}
|
2023-10-10T07:38:25+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "eCommerceQuery"
More Information needed
|
[
"# Dataset Card for \"eCommerceQuery\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"eCommerceQuery\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"eCommerceQuery\"\n\nMore Information needed"
] |
6ff1e637d1e6eba1ad5f1663bd33cff4136a0c95
|
The mini-math23k-v1 dataset is composed of ~23,000 entries drawn from open datasets across the AI landscape, including:
- [TIGER-Lab/MathInstruct](https://huggingface.co/datasets/TIGER-Lab/MathInstruct)
- [Birchlabs/openai-prm800k-solutions-only](https://huggingface.co/datasets/Birchlabs/openai-prm800k-solutions-only)
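A minimal loading sketch (the standard `datasets` API and the `train` split name are assumptions; neither is documented in this card):

```python
from datasets import load_dataset

# "train" is an assumed split name; the card does not document its splits.
ds = load_dataset("aloobun/mini-math23k-v1", split="train")
print(ds[0])  # inspect one of the ~23,000 entries
```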
Credits:
```
Birchlabs
```
```
@article{yue2023mammoth,
title={MAmmoTH: Building Math Generalist Models through Hybrid Instruction Tuning},
  author={Xiang Yue and Xingwei Qu and Ge Zhang and Yao Fu and Wenhao Huang and Huan Sun and Yu Su and Wenhu Chen},
journal={arXiv preprint arXiv:2309.05653},
year={2023}
}
```
|
aloobun/mini-math23k-v1
|
[
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"license:mit",
"region:us"
] |
2023-10-10T07:54:00+00:00
|
{"language": ["en"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "pretty_name": "math"}
|
2023-10-10T11:40:42+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us
|
The mini-math23k-v1 dataset is composed of ~23,000 entries drawn from open datasets across the AI landscape, including:
- TIGER-Lab/MathInstruct
- Birchlabs/openai-prm800k-solutions-only
Credits:
|
[] |
[
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us \n"
] |
[
38
] |
[
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #license-mit #region-us \n"
] |
6e7d1330cafd54d51dca0dbab2224f64a6f275b4
|
# Dad Jokes dataset
This dataset is generated from the [Kaggle Reddit Dad Jokes](https://www.kaggle.com/datasets/oktayozturk010/reddit-dad-jokes) by [Oktay Ozturk](https://www.kaggle.com/oktayozturk010), with the following modifications:
* Only jokes with 5+ votes were sampled. Less upvoted jokes are too cringe.
* With a set of heuristics, each joke was split into two parts: base and the punchline.
## Format
The dataset is formatted as a CSV, and is split into train/test parts:
* train: 52000 samples
* test: 1400 samples
```csv
"question","response"
"I asked my priest how he gets holy water","He said it’s just regular water, he just boils the hell out of it"
"Life Hack: If you play My Chemical Romance loud enough in your yard","your grass will cut itself"
"Why did Mr. Potato Head get pulled over","He was baked"
"How did the Mexican John Wick taste his Burrito","He took Juan Lick"
```
## Usage
With a base/punchline split, this dataset can be used for a joke prediction task with any LLM.
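A minimal sketch of that setup (assuming the standard `datasets` API picks up the CSV splits; the prompt template itself is illustrative):

```python
from datasets import load_dataset

# Column names follow the CSV header shown above: "question" and "response".
jokes = load_dataset("shuttie/dadjokes", split="test")

sample = jokes[0]
prompt = f"Complete the dad joke.\nSetup: {sample['question']}\nPunchline:"
reference = sample["response"]  # gold punchline to score the model's completion
```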
## License
Apache 2.0.
|
shuttie/dadjokes
|
[
"size_categories:10K<n<100K",
"language:en",
"license:apache-2.0",
"region:us"
] |
2023-10-10T08:27:55+00:00
|
{"language": ["en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"]}
|
2023-10-10T08:40:50+00:00
|
[] |
[
"en"
] |
TAGS
#size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us
|
# Dad Jokes dataset
This dataset is generated from the Kaggle Reddit Dad Jokes by Oktay Ozturk, with the following modifications:
* Only jokes with 5+ votes were sampled. Less upvoted jokes are too cringe.
* With a set of heuristics, each joke was split into two parts: base and the punchline.
## Format
The dataset is formatted as a CSV, and is split into train/test parts:
* train: 52000 samples
* test: 1400 samples
## Usage
With a base/punchline split, this dataset can be used for a joke prediction task with any LLM.
## License
Apache 2.0.
|
[
"# Dad Jokes dataset\n\nThis dataset is generated from the Kaggle Reddit Dad Jokes by Oktay Ozturk, with the following modifications:\n\n* Only jokes with 5+ votes were sampled. Less upvoted jokes are too cringe.\n* With a set of heuristics, each joke was split into two parts: base and the punchline.",
"## Format\n\nThe dataset is formatted as a CSV, and is split into train/test parts:\n\n* train: 52000 samples\n* test: 1400 samples",
"## Usage\n\nWith a base/punchline split, this dataset can be used for a joke prediction task with any LLM.",
"## License\n\nApache 2.0."
] |
[
"TAGS\n#size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n",
"# Dad Jokes dataset\n\nThis dataset is generated from the Kaggle Reddit Dad Jokes by Oktay Ozturk, with the following modifications:\n\n* Only jokes with 5+ votes were sampled. Less upvoted jokes are too cringe.\n* With a set of heuristics, each joke was split into two parts: base and the punchline.",
"## Format\n\nThe dataset is formatted as a CSV, and is split into train/test parts:\n\n* train: 52000 samples\n* test: 1400 samples",
"## Usage\n\nWith a base/punchline split, this dataset can be used for a joke prediction task with any LLM.",
"## License\n\nApache 2.0."
] |
[
30,
81,
35,
29,
6
] |
[
"passage: TAGS\n#size_categories-10K<n<100K #language-English #license-apache-2.0 #region-us \n# Dad Jokes dataset\n\nThis dataset is generated from the Kaggle Reddit Dad Jokes by Oktay Ozturk, with the following modifications:\n\n* Only jokes with 5+ votes were sampled. Less upvoted jokes are too cringe.\n* With a set of heuristics, each joke was split into two parts: base and the punchline.## Format\n\nThe dataset is formatted as a CSV, and is split into train/test parts:\n\n* train: 52000 samples\n* test: 1400 samples## Usage\n\nWith a base/punchline split, this dataset can be used for a joke prediction task with any LLM.## License\n\nApache 2.0."
] |
ddebfaf95fe12d2c9314c9fd336c0c9e5e34a96e
|
# Dataset Card for "EsportLogosV2_processed_V2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
temasarkisov/EsportLogosV2_processed_V2
|
[
"region:us"
] |
2023-10-10T08:28:53+00:00
|
{"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 4561815.0, "num_examples": 73}], "download_size": 4560462, "dataset_size": 4561815.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T08:28:56+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "EsportLogosV2_processed_V2"
More Information needed
|
[
"# Dataset Card for \"EsportLogosV2_processed_V2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"EsportLogosV2_processed_V2\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"EsportLogosV2_processed_V2\"\n\nMore Information needed"
] |
cfd3b6cc87706d5932f9b7e2c22970347acdfdce
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T21:38:01.231208](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1/blob/main/results_2023-10-25T21-38-01.231208.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.22818791946308725,
"em_stderr": 0.00429775606227976,
"f1": 0.2705872483221472,
"f1_stderr": 0.004287875673448546,
"acc": 0.45044049897886096,
"acc_stderr": 0.010454670771991827
},
"harness|drop|3": {
"em": 0.22818791946308725,
"em_stderr": 0.00429775606227976,
"f1": 0.2705872483221472,
"f1_stderr": 0.004287875673448546
},
"harness|gsm8k|5": {
"acc": 0.12661106899166036,
"acc_stderr": 0.009159715283081099
},
"harness|winogrande|5": {
"acc": 0.7742699289660616,
"acc_stderr": 0.011749626260902557
}
}
```
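The aggregated numbers above can also be pulled from the "results" configuration defined for this dataset (a sketch; the "latest" split follows the run timestamps listed in this card's config):

```python
from datasets import load_dataset

# "results" aggregates all runs; the "latest" split points at the most recent one.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1",
    "results",
    split="latest",
)
```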
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1
|
[
"region:us"
] |
2023-10-10T08:30:57+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T21:38:01.231208](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1/blob/main/results_2023-10-25T21-38-01.231208.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.22818791946308725,\n \"em_stderr\": 0.00429775606227976,\n \"f1\": 0.2705872483221472,\n \"f1_stderr\": 0.004287875673448546,\n \"acc\": 0.45044049897886096,\n \"acc_stderr\": 0.010454670771991827\n },\n \"harness|drop|3\": {\n \"em\": 0.22818791946308725,\n \"em_stderr\": 0.00429775606227976,\n \"f1\": 0.2705872483221472,\n \"f1_stderr\": 0.004287875673448546\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12661106899166036,\n \"acc_stderr\": 0.009159715283081099\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7742699289660616,\n \"acc_stderr\": 0.011749626260902557\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T21_38_01.231208", "path": ["**/details_harness|drop|3_2023-10-25T21-38-01.231208.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T21-38-01.231208.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T21_38_01.231208", "path": ["**/details_harness|gsm8k|5_2023-10-25T21-38-01.231208.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T21-38-01.231208.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": 
["**/details_harness|hellaswag|10_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-30-33.515075.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-30-33.515075.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-30-33.515075.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-10-10T09-30-33.515075.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-30-33.515075.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-30-33.515075.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-30-33.515075.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T21_38_01.231208", "path": ["**/details_harness|winogrande|5_2023-10-25T21-38-01.231208.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T21-38-01.231208.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T09_30_33.515075", "path": ["results_2023-10-10T09-30-33.515075.parquet"]}, {"split": "2023_10_25T21_38_01.231208", "path": ["results_2023-10-25T21-38-01.231208.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T21-38-01.231208.parquet"]}]}]}
|
2023-10-25T20:38:13+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
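A minimal sketch, assuming the details repository follows the leaderboard's usual `details_<org>__<model>` naming convention (the config name below is taken from this card's metadata):

```python
from datasets import load_dataset

# Hypothetical repo id, assembled from the model name per the naming convention above.
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1",
	"harness_winogrande_5",
	split="train")
```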
## Latest results
These are the latest results from run 2023-10-25T21:38:01.231208 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
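The results block itself was not preserved in this card. Under the same assumed repo id as above, the aggregates for this run can be pulled from the "results" configuration:

```python
from datasets import load_dataset

# "latest" always points at the most recent run's aggregates (see the note above).
results = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1",
	"results",
	split="latest")
```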
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T21:38:01.231208(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T21:38:01.231208(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
40,
31,
188,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_3.8w-r16-gate_up_down-test1 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T21:38:01.231208(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
0860e89789f644328fbb4c88e1c0d3c63f8344d5
|
# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-7b-v1.0
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [uukuguy/speechless-code-mistral-7b-v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0",
"harness_winogrande_5",
split="train")
```
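Beyond a single task, the same `datasets` API can enumerate every configuration and pull the aggregated scores; a sketch, using the "results"/"latest" layout described above:

```python
from datasets import get_dataset_config_names, load_dataset

repo = "open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0"

# One configuration per evaluated task, plus the aggregated "results" configuration.
configs = get_dataset_config_names(repo)
print(len(configs), configs[:3])

# The "latest" split always points at the most recent run's aggregates.
aggregated = load_dataset(repo, "results", split="latest")
```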
## Latest results
These are the [latest results from run 2023-12-08T00:30:08.150409](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0/blob/main/results_2023-12-08T00-30-08.150409.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6328087510139756,
"acc_stderr": 0.03241662679481022,
"acc_norm": 0.6370452055948831,
"acc_norm_stderr": 0.033063806342390464,
"mc1": 0.33414932680538556,
"mc1_stderr": 0.016512530677150538,
"mc2": 0.4789652694280945,
"mc2_stderr": 0.01485651327015513
},
"harness|arc:challenge|25": {
"acc": 0.5708191126279863,
"acc_stderr": 0.014464085894870655,
"acc_norm": 0.6117747440273038,
"acc_norm_stderr": 0.014241614207414044
},
"harness|hellaswag|10": {
"acc": 0.6405098585939056,
"acc_stderr": 0.004788703173474743,
"acc_norm": 0.8376817367058355,
"acc_norm_stderr": 0.003679889125399815
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.26,
"acc_stderr": 0.04408440022768079,
"acc_norm": 0.26,
"acc_norm_stderr": 0.04408440022768079
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.562962962962963,
"acc_stderr": 0.042849586397534015,
"acc_norm": 0.562962962962963,
"acc_norm_stderr": 0.042849586397534015
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6842105263157895,
"acc_stderr": 0.03782728980865469,
"acc_norm": 0.6842105263157895,
"acc_norm_stderr": 0.03782728980865469
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.6,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.6,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6792452830188679,
"acc_stderr": 0.028727502957880274,
"acc_norm": 0.6792452830188679,
"acc_norm_stderr": 0.028727502957880274
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7291666666666666,
"acc_stderr": 0.03716177437566017,
"acc_norm": 0.7291666666666666,
"acc_norm_stderr": 0.03716177437566017
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6184971098265896,
"acc_stderr": 0.03703851193099521,
"acc_norm": 0.6184971098265896,
"acc_norm_stderr": 0.03703851193099521
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.042923469599092816,
"acc_norm": 0.76,
"acc_norm_stderr": 0.042923469599092816
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5659574468085107,
"acc_stderr": 0.03240038086792747,
"acc_norm": 0.5659574468085107,
"acc_norm_stderr": 0.03240038086792747
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4473684210526316,
"acc_stderr": 0.04677473004491199,
"acc_norm": 0.4473684210526316,
"acc_norm_stderr": 0.04677473004491199
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555497,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555497
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4074074074074074,
"acc_stderr": 0.02530590624159063,
"acc_norm": 0.4074074074074074,
"acc_norm_stderr": 0.02530590624159063
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4523809523809524,
"acc_stderr": 0.044518079590553275,
"acc_norm": 0.4523809523809524,
"acc_norm_stderr": 0.044518079590553275
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7580645161290323,
"acc_stderr": 0.02436259969303108,
"acc_norm": 0.7580645161290323,
"acc_norm_stderr": 0.02436259969303108
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5123152709359606,
"acc_stderr": 0.035169204442208966,
"acc_norm": 0.5123152709359606,
"acc_norm_stderr": 0.035169204442208966
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03346409881055953,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03346409881055953
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.797979797979798,
"acc_stderr": 0.028606204289229872,
"acc_norm": 0.797979797979798,
"acc_norm_stderr": 0.028606204289229872
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8963730569948186,
"acc_stderr": 0.02199531196364424,
"acc_norm": 0.8963730569948186,
"acc_norm_stderr": 0.02199531196364424
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6538461538461539,
"acc_stderr": 0.024121125416941183,
"acc_norm": 0.6538461538461539,
"acc_norm_stderr": 0.024121125416941183
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.31851851851851853,
"acc_stderr": 0.02840653309060846,
"acc_norm": 0.31851851851851853,
"acc_norm_stderr": 0.02840653309060846
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.029344572500634335,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.029344572500634335
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.31125827814569534,
"acc_stderr": 0.03780445850526733,
"acc_norm": 0.31125827814569534,
"acc_norm_stderr": 0.03780445850526733
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8293577981651377,
"acc_stderr": 0.016129271025099857,
"acc_norm": 0.8293577981651377,
"acc_norm_stderr": 0.016129271025099857
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5092592592592593,
"acc_stderr": 0.034093869469927006,
"acc_norm": 0.5092592592592593,
"acc_norm_stderr": 0.034093869469927006
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7745098039215687,
"acc_stderr": 0.02933116229425174,
"acc_norm": 0.7745098039215687,
"acc_norm_stderr": 0.02933116229425174
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7679324894514767,
"acc_stderr": 0.027479744550808514,
"acc_norm": 0.7679324894514767,
"acc_norm_stderr": 0.027479744550808514
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.695067264573991,
"acc_stderr": 0.030898610882477518,
"acc_norm": 0.695067264573991,
"acc_norm_stderr": 0.030898610882477518
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7786259541984732,
"acc_stderr": 0.03641297081313729,
"acc_norm": 0.7786259541984732,
"acc_norm_stderr": 0.03641297081313729
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.8099173553719008,
"acc_stderr": 0.03581796951709282,
"acc_norm": 0.8099173553719008,
"acc_norm_stderr": 0.03581796951709282
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.04330043749650741,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.04330043749650741
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7361963190184049,
"acc_stderr": 0.034624199316156234,
"acc_norm": 0.7361963190184049,
"acc_norm_stderr": 0.034624199316156234
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7572815533980582,
"acc_stderr": 0.04245022486384495,
"acc_norm": 0.7572815533980582,
"acc_norm_stderr": 0.04245022486384495
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8547008547008547,
"acc_stderr": 0.023086635086841407,
"acc_norm": 0.8547008547008547,
"acc_norm_stderr": 0.023086635086841407
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8173690932311622,
"acc_stderr": 0.013816335389973147,
"acc_norm": 0.8173690932311622,
"acc_norm_stderr": 0.013816335389973147
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.34301675977653634,
"acc_stderr": 0.015876912673057728,
"acc_norm": 0.34301675977653634,
"acc_norm_stderr": 0.015876912673057728
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.738562091503268,
"acc_stderr": 0.025160998214292456,
"acc_norm": 0.738562091503268,
"acc_norm_stderr": 0.025160998214292456
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.02540383297817961,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.02540383297817961
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.75,
"acc_stderr": 0.02409347123262133,
"acc_norm": 0.75,
"acc_norm_stderr": 0.02409347123262133
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.4929078014184397,
"acc_stderr": 0.02982449855912901,
"acc_norm": 0.4929078014184397,
"acc_norm_stderr": 0.02982449855912901
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4954367666232073,
"acc_stderr": 0.012769704263117528,
"acc_norm": 0.4954367666232073,
"acc_norm_stderr": 0.012769704263117528
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6838235294117647,
"acc_stderr": 0.028245687391462927,
"acc_norm": 0.6838235294117647,
"acc_norm_stderr": 0.028245687391462927
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6748366013071896,
"acc_stderr": 0.01895088677080631,
"acc_norm": 0.6748366013071896,
"acc_norm_stderr": 0.01895088677080631
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7224489795918367,
"acc_stderr": 0.02866685779027465,
"acc_norm": 0.7224489795918367,
"acc_norm_stderr": 0.02866685779027465
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.86,
"acc_stderr": 0.034873508801977704,
"acc_norm": 0.86,
"acc_norm_stderr": 0.034873508801977704
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5481927710843374,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.5481927710843374,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8187134502923976,
"acc_stderr": 0.029547741687640038,
"acc_norm": 0.8187134502923976,
"acc_norm_stderr": 0.029547741687640038
},
"harness|truthfulqa:mc|0": {
"mc1": 0.33414932680538556,
"mc1_stderr": 0.016512530677150538,
"mc2": 0.4789652694280945,
"mc2_stderr": 0.01485651327015513
},
"harness|winogrande|5": {
"acc": 0.7837411207576953,
"acc_stderr": 0.011570614861409348
},
"harness|gsm8k|5": {
"acc": 0.4700530705079606,
"acc_stderr": 0.013747759685444703
}
}
```
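As a quick illustration of working with these aggregates, the sketch below downloads the results file linked above and ranks the MMLU subtasks. The top-level `"results"` key is an assumption about the file layout (the card only shows the inner per-task dict):

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the aggregated results file linked in this card from the details repository.
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0",
    filename="results_2023-12-08T00-30-08.150409.json",
    repo_type="dataset",
)
with open(path) as f:
    payload = json.load(f)

# Assumption: the per-task dict printed above sits under a top-level "results" key.
scores = payload["results"]
mmlu = {k: v["acc"] for k, v in scores.items() if k.startswith("harness|hendrycksTest-")}
print(max(mmlu, key=mmlu.get))  # harness|hendrycksTest-high_school_government_and_politics|5 (acc ~0.896)
print(min(mmlu, key=mmlu.get))  # harness|hendrycksTest-abstract_algebra|5 (acc 0.26)
```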
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0
|
[
"region:us"
] |
2023-10-10T08:36:03+00:00
|
{"pretty_name": "Evaluation run of uukuguy/speechless-code-mistral-7b-v1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [uukuguy/speechless-code-mistral-7b-v1.0](https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-08T00:30:08.150409](https://huggingface.co/datasets/open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0/blob/main/results_2023-12-08T00-30-08.150409.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6328087510139756,\n \"acc_stderr\": 0.03241662679481022,\n \"acc_norm\": 0.6370452055948831,\n \"acc_norm_stderr\": 0.033063806342390464,\n \"mc1\": 0.33414932680538556,\n \"mc1_stderr\": 0.016512530677150538,\n \"mc2\": 0.4789652694280945,\n \"mc2_stderr\": 0.01485651327015513\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5708191126279863,\n \"acc_stderr\": 0.014464085894870655,\n \"acc_norm\": 0.6117747440273038,\n \"acc_norm_stderr\": 0.014241614207414044\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6405098585939056,\n \"acc_stderr\": 0.004788703173474743,\n \"acc_norm\": 0.8376817367058355,\n \"acc_norm_stderr\": 0.003679889125399815\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.26,\n \"acc_stderr\": 0.04408440022768079,\n \"acc_norm\": 0.26,\n \"acc_norm_stderr\": 0.04408440022768079\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.562962962962963,\n \"acc_stderr\": 0.042849586397534015,\n \"acc_norm\": 0.562962962962963,\n \"acc_norm_stderr\": 0.042849586397534015\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6842105263157895,\n \"acc_stderr\": 0.03782728980865469,\n \"acc_norm\": 0.6842105263157895,\n \"acc_norm_stderr\": 0.03782728980865469\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.6,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.6,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6792452830188679,\n \"acc_stderr\": 0.028727502957880274,\n \"acc_norm\": 0.6792452830188679,\n \"acc_norm_stderr\": 0.028727502957880274\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7291666666666666,\n \"acc_stderr\": 0.03716177437566017,\n \"acc_norm\": 0.7291666666666666,\n \"acc_norm_stderr\": 0.03716177437566017\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n 
\"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6184971098265896,\n \"acc_stderr\": 0.03703851193099521,\n \"acc_norm\": 0.6184971098265896,\n \"acc_norm_stderr\": 0.03703851193099521\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.042923469599092816,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.042923469599092816\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5659574468085107,\n \"acc_stderr\": 0.03240038086792747,\n \"acc_norm\": 0.5659574468085107,\n \"acc_norm_stderr\": 0.03240038086792747\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4473684210526316,\n \"acc_stderr\": 0.04677473004491199,\n \"acc_norm\": 0.4473684210526316,\n \"acc_norm_stderr\": 0.04677473004491199\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4074074074074074,\n \"acc_stderr\": 0.02530590624159063,\n \"acc_norm\": 0.4074074074074074,\n \"acc_norm_stderr\": 0.02530590624159063\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4523809523809524,\n \"acc_stderr\": 0.044518079590553275,\n \"acc_norm\": 0.4523809523809524,\n \"acc_norm_stderr\": 0.044518079590553275\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7580645161290323,\n \"acc_stderr\": 0.02436259969303108,\n \"acc_norm\": 0.7580645161290323,\n \"acc_norm_stderr\": 0.02436259969303108\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5123152709359606,\n \"acc_stderr\": 0.035169204442208966,\n \"acc_norm\": 0.5123152709359606,\n \"acc_norm_stderr\": 0.035169204442208966\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03346409881055953,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03346409881055953\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.797979797979798,\n \"acc_stderr\": 0.028606204289229872,\n \"acc_norm\": 0.797979797979798,\n \"acc_norm_stderr\": 0.028606204289229872\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8963730569948186,\n \"acc_stderr\": 0.02199531196364424,\n \"acc_norm\": 0.8963730569948186,\n \"acc_norm_stderr\": 0.02199531196364424\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6538461538461539,\n \"acc_stderr\": 0.024121125416941183,\n \"acc_norm\": 0.6538461538461539,\n \"acc_norm_stderr\": 0.024121125416941183\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.31851851851851853,\n \"acc_stderr\": 0.02840653309060846,\n \"acc_norm\": 0.31851851851851853,\n \"acc_norm_stderr\": 0.02840653309060846\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.029344572500634335,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.029344572500634335\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.31125827814569534,\n \"acc_stderr\": 0.03780445850526733,\n \"acc_norm\": 0.31125827814569534,\n \"acc_norm_stderr\": 0.03780445850526733\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8293577981651377,\n \"acc_stderr\": 0.016129271025099857,\n \"acc_norm\": 0.8293577981651377,\n \"acc_norm_stderr\": 0.016129271025099857\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5092592592592593,\n \"acc_stderr\": 0.034093869469927006,\n \"acc_norm\": 0.5092592592592593,\n \"acc_norm_stderr\": 0.034093869469927006\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7745098039215687,\n \"acc_stderr\": 0.02933116229425174,\n \"acc_norm\": 0.7745098039215687,\n \"acc_norm_stderr\": 0.02933116229425174\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7679324894514767,\n \"acc_stderr\": 0.027479744550808514,\n \"acc_norm\": 0.7679324894514767,\n \"acc_norm_stderr\": 0.027479744550808514\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.695067264573991,\n \"acc_stderr\": 0.030898610882477518,\n \"acc_norm\": 0.695067264573991,\n \"acc_norm_stderr\": 0.030898610882477518\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7786259541984732,\n \"acc_stderr\": 0.03641297081313729,\n \"acc_norm\": 0.7786259541984732,\n \"acc_norm_stderr\": 0.03641297081313729\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.8099173553719008,\n \"acc_stderr\": 0.03581796951709282,\n \"acc_norm\": 0.8099173553719008,\n \"acc_norm_stderr\": 0.03581796951709282\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.04330043749650741,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.04330043749650741\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7361963190184049,\n \"acc_stderr\": 0.034624199316156234,\n \"acc_norm\": 0.7361963190184049,\n \"acc_norm_stderr\": 0.034624199316156234\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7572815533980582,\n \"acc_stderr\": 0.04245022486384495,\n \"acc_norm\": 0.7572815533980582,\n \"acc_norm_stderr\": 0.04245022486384495\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8547008547008547,\n \"acc_stderr\": 0.023086635086841407,\n \"acc_norm\": 0.8547008547008547,\n \"acc_norm_stderr\": 0.023086635086841407\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8173690932311622,\n 
\"acc_stderr\": 0.013816335389973147,\n \"acc_norm\": 0.8173690932311622,\n \"acc_norm_stderr\": 0.013816335389973147\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.34301675977653634,\n \"acc_stderr\": 0.015876912673057728,\n \"acc_norm\": 0.34301675977653634,\n \"acc_norm_stderr\": 0.015876912673057728\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.738562091503268,\n \"acc_stderr\": 0.025160998214292456,\n \"acc_norm\": 0.738562091503268,\n \"acc_norm_stderr\": 0.025160998214292456\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n \"acc_stderr\": 0.02540383297817961,\n \"acc_norm\": 0.7234726688102894,\n \"acc_norm_stderr\": 0.02540383297817961\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.02409347123262133,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.02409347123262133\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.4929078014184397,\n \"acc_stderr\": 0.02982449855912901,\n \"acc_norm\": 0.4929078014184397,\n \"acc_norm_stderr\": 0.02982449855912901\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4954367666232073,\n \"acc_stderr\": 0.012769704263117528,\n \"acc_norm\": 0.4954367666232073,\n \"acc_norm_stderr\": 0.012769704263117528\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6838235294117647,\n \"acc_stderr\": 0.028245687391462927,\n \"acc_norm\": 0.6838235294117647,\n \"acc_norm_stderr\": 0.028245687391462927\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6748366013071896,\n \"acc_stderr\": 0.01895088677080631,\n \"acc_norm\": 0.6748366013071896,\n \"acc_norm_stderr\": 0.01895088677080631\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7224489795918367,\n \"acc_stderr\": 0.02866685779027465,\n \"acc_norm\": 0.7224489795918367,\n \"acc_norm_stderr\": 0.02866685779027465\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.86,\n \"acc_stderr\": 0.034873508801977704,\n \"acc_norm\": 0.86,\n \"acc_norm_stderr\": 0.034873508801977704\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5481927710843374,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.5481927710843374,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8187134502923976,\n \"acc_stderr\": 0.029547741687640038,\n \"acc_norm\": 0.8187134502923976,\n \"acc_norm_stderr\": 0.029547741687640038\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.33414932680538556,\n \"mc1_stderr\": 0.016512530677150538,\n \"mc2\": 0.4789652694280945,\n \"mc2_stderr\": 0.01485651327015513\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7837411207576953,\n \"acc_stderr\": 0.011570614861409348\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.4700530705079606,\n \"acc_stderr\": 0.013747759685444703\n }\n}\n```", "repo_url": 
"https://huggingface.co/uukuguy/speechless-code-mistral-7b-v1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|arc:challenge|25_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T16_31_08.459023", "path": ["**/details_harness|drop|3_2023-10-28T16-31-08.459023.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T16-31-08.459023.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T16_31_08.459023", "path": ["**/details_harness|gsm8k|5_2023-10-28T16-31-08.459023.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|gsm8k|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hellaswag|10_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hellaswag|10_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-35-40.611521.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T09-35-40.611521.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T00-30-08.150409.parquet", 
"**/details_harness|hendrycksTest-management|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T00-30-08.150409.parquet", 
"**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-08T00-30-08.150409.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": 
["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": 
["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": 
["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": 
"2023_10_28T16_31_08.459023", "path": ["**/details_harness|winogrande|5_2023-10-28T16-31-08.459023.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["**/details_harness|winogrande|5_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-08T00-30-08.150409.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T09_35_40.611521", "path": ["results_2023-10-10T09-35-40.611521.parquet"]}, {"split": "2023_10_28T16_31_08.459023", "path": ["results_2023-10-28T16-31-08.459023.parquet"]}, {"split": "2023_12_08T00_30_08.150409", "path": ["results_2023-12-08T00-30-08.150409.parquet"]}, {"split": "latest", "path": ["results_2023-12-08T00-30-08.150409.parquet"]}]}]}
|
2023-12-08T00:34:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-7b-v1.0
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-7b-v1.0 on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
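The loading call itself was stripped from this card. A minimal sketch, assuming the repository follows the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` naming convention (the exact repo id is not given here, so treat it as an inferred placeholder):

```python
from datasets import load_dataset

# Assumed repo id, inferred from the "details_<org>__<model>" naming pattern.
data = load_dataset("open-llm-leaderboard/details_uukuguy__speechless-code-mistral-7b-v1.0",
	"harness_winogrande_5",  # one of the 64 task configurations
	split="train")           # "train" always points to the latest run
```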
## Latest results
These are the latest results from run 2023-12-08T00:30:08.150409 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-7b-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-7b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-08T00:30:08.150409(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-7b-v1.0",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-7b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-08T00:30:08.150409(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
27,
31,
176,
68,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of uukuguy/speechless-code-mistral-7b-v1.0## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model uukuguy/speechless-code-mistral-7b-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 3 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-08T00:30:08.150409(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7bdb48d13622647ce4000dd6752c442e651f9fe9
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
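# "harness_winogrande_5" selects one of the 64 task configurations; split="train" tracks the latest run.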
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down",
"harness_winogrande_5",
split="train")
```
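To pin a specific run instead of the moving "train"/"latest" split, you can pass the timestamped split name directly. A sketch, with the split name taken from this repository's config listing and `harness_gsm8k_5` as the example task:

```python
from datasets import load_dataset

# Load the GSM8K details from the 2023-10-25 run rather than the latest one.
data = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down",
    "harness_gsm8k_5",
    split="2023_10_25T22_16_15.844961",
)
```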
## Latest results
These are the [latest results from run 2023-10-25T22:16:15.844961](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down/blob/main/results_2023-10-25T22-16-15.844961.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.044148489932885907,
"em_stderr": 0.0021037435299994067,
"f1": 0.09857802013422778,
"f1_stderr": 0.0023775705231284467,
"acc": 0.4617194030779578,
"acc_stderr": 0.010887835734442838
},
"harness|drop|3": {
"em": 0.044148489932885907,
"em_stderr": 0.0021037435299994067,
"f1": 0.09857802013422778,
"f1_stderr": 0.0023775705231284467
},
"harness|gsm8k|5": {
"acc": 0.15390447308567096,
"acc_stderr": 0.009939799304049013
},
"harness|winogrande|5": {
"acc": 0.7695343330702447,
"acc_stderr": 0.011835872164836666
}
}
```
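To read these aggregated numbers programmatically rather than copying the JSON above, you can load the "results" configuration described in the summary; a short sketch (its "latest" split mirrors the most recent run):

```python
from datasets import load_dataset

# The "results" config aggregates every run; split="latest" is the newest one.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down",
    "results",
    split="latest",
)
print(results[0])
```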
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down
|
[
"region:us"
] |
2023-10-10T08:36:18+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T22:16:15.844961](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down/blob/main/results_2023-10-25T22-16-15.844961.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.044148489932885907,\n \"em_stderr\": 0.0021037435299994067,\n \"f1\": 0.09857802013422778,\n \"f1_stderr\": 0.0023775705231284467,\n \"acc\": 0.4617194030779578,\n \"acc_stderr\": 0.010887835734442838\n },\n \"harness|drop|3\": {\n \"em\": 0.044148489932885907,\n \"em_stderr\": 0.0021037435299994067,\n \"f1\": 0.09857802013422778,\n \"f1_stderr\": 0.0023775705231284467\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.15390447308567096,\n \"acc_stderr\": 0.009939799304049013\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7695343330702447,\n \"acc_stderr\": 0.011835872164836666\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T22_16_15.844961", "path": ["**/details_harness|drop|3_2023-10-25T22-16-15.844961.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T22-16-15.844961.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T22_16_15.844961", "path": ["**/details_harness|gsm8k|5_2023-10-25T22-16-15.844961.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T22-16-15.844961.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": 
["**/details_harness|hellaswag|10_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-35-55.043179.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-35-55.043179.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-35-55.043179.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-10-10T09-35-55.043179.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-35-55.043179.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-35-55.043179.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-35-55.043179.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T22_16_15.844961", "path": ["**/details_harness|winogrande|5_2023-10-25T22-16-15.844961.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T22-16-15.844961.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T09_35_55.043179", "path": ["results_2023-10-10T09-35-55.043179.parquet"]}, {"split": "2023_10_25T22_16_15.844961", "path": ["results_2023-10-25T22-16-15.844961.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T22-16-15.844961.parquet"]}]}]}
|
2023-10-25T21:16:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
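The code block itself was stripped from this rendering; as a minimal sketch, assuming the repository follows the `open-llm-leaderboard/details_<org>__<model>` naming pattern used elsewhere in this document:

```python
from datasets import load_dataset

# Per-sample details for one evaluated task (here: winogrande, 5-shot);
# the "train" split always points at the latest run.
data = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down",
    "harness_winogrande_5",
    split="train",
)
```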
## Latest results
These are the latest results from run 2023-10-25T22:16:15.844961 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
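A sketch of how to pull those aggregated numbers, assuming the "results" configuration and its "latest" split behave as described above:

```python
from datasets import load_dataset

# Aggregated metrics for each run live in the "results" configuration;
# the "latest" split points at the 2023-10-25T22:16:15.844961 run.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down",
    "results",
    split="latest",
)
print(results[0])
```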
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T22:16:15.844961(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T22:16:15.844961(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
43,
31,
191,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_addto15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T22:16:15.844961(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
52f0541472c72aa508756f1363626deb7e1d7a89
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down",
"harness_winogrande_5",
split="train")
```
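Replace "harness_winogrande_5" with any other configuration name listed in this repository (for instance "harness_gsm8k_5", "harness_drop_3", or "results") to load the details of a different eval.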
## Latest results
These are the [latest results from run 2023-10-28T17:14:13.466730](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down/blob/main/results_2023-10-28T17-14-13.466730.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.009123322147651007,
"em_stderr": 0.000973701770554167,
"f1": 0.06916421979865739,
"f1_stderr": 0.00161270274465004,
"acc": 0.43659154378391707,
"acc_stderr": 0.01026195907539337
},
"harness|drop|3": {
"em": 0.009123322147651007,
"em_stderr": 0.000973701770554167,
"f1": 0.06916421979865739,
"f1_stderr": 0.00161270274465004
},
"harness|gsm8k|5": {
"acc": 0.10917361637604246,
"acc_stderr": 0.008590089300511116
},
"harness|winogrande|5": {
"acc": 0.7640094711917916,
"acc_stderr": 0.011933828850275625
}
}
```
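As a short usage sketch (assuming the "results" configuration mirrors the JSON above), the same numbers can be read programmatically:

```python
from datasets import load_dataset

# The "latest" split of the "results" configuration holds the aggregated
# metrics shown in the JSON above.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down",
    "results",
    split="latest",
)
print(results[0])  # e.g. winogrande 5-shot acc ≈ 0.764 per the JSON above
```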
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down
|
[
"region:us"
] |
2023-10-10T08:43:07+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T17:14:13.466730](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down/blob/main/results_2023-10-28T17-14-13.466730.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.009123322147651007,\n \"em_stderr\": 0.000973701770554167,\n \"f1\": 0.06916421979865739,\n \"f1_stderr\": 0.00161270274465004,\n \"acc\": 0.43659154378391707,\n \"acc_stderr\": 0.01026195907539337\n },\n \"harness|drop|3\": {\n \"em\": 0.009123322147651007,\n \"em_stderr\": 0.000973701770554167,\n \"f1\": 0.06916421979865739,\n \"f1_stderr\": 0.00161270274465004\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.10917361637604246,\n \"acc_stderr\": 0.008590089300511116\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7640094711917916,\n \"acc_stderr\": 0.011933828850275625\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T17_14_13.466730", "path": ["**/details_harness|drop|3_2023-10-28T17-14-13.466730.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T17-14-13.466730.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T17_14_13.466730", "path": ["**/details_harness|gsm8k|5_2023-10-28T17-14-13.466730.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T17-14-13.466730.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": 
["**/details_harness|hellaswag|10_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-42-44.126959.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-42-44.126959.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-42-44.126959.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-10-10T09-42-44.126959.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-42-44.126959.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-42-44.126959.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-42-44.126959.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T17_14_13.466730", "path": ["**/details_harness|winogrande|5_2023-10-28T17-14-13.466730.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T17-14-13.466730.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T09_42_44.126959", "path": ["results_2023-10-10T09-42-44.126959.parquet"]}, {"split": "2023_10_28T17_14_13.466730", "path": ["results_2023-10-28T17-14-13.466730.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T17-14-13.466730.parquet"]}]}]}
|
2023-10-28T16:14:26+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
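For example, a minimal sketch (the repository id below follows the "open-llm-leaderboard/details_<org>__<model>" naming pattern used by the full cards elsewhere in this dump, and "harness_winogrande_5" is one of the configs listed in this row's metadata):

```python
from datasets import load_dataset

# Load the winogrande details for this evaluation run; the "train"
# split points at the latest results for this config.
data = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down",
    "harness_winogrande_5",
    split="train",
)
```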
## Latest results
These are the latest results from run 2023-10-28T17:14:13.466730 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T17:14:13.466730(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T17:14:13.466730(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
42,
31,
190,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE4_compare15k_4.5w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T17:14:13.466730(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
5b3819a7079547ca3422a884459a4238ce8438cf
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o",
"harness_winogrande_5",
split="train")
```
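To fetch only the aggregated metrics, you can load the "results" configuration mentioned above; a sketch assuming the "latest" split listed in this dataset's metadata:

```python
from datasets import load_dataset

# The "results" config stores the aggregated metrics of each run;
# the "latest" split always points at the most recent one.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o",
    "results",
    split="latest",
)
```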
## Latest results
These are the [latest results from run 2023-10-28T03:32:51.454817](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o/blob/main/results_2023-10-28T03-32-51.454817.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.37258808724832215,
"em_stderr": 0.004951428522573584,
"f1": 0.41863255033557134,
"f1_stderr": 0.004838761301543826,
"acc": 0.4445987937813739,
"acc_stderr": 0.010466651540029098
},
"harness|drop|3": {
"em": 0.37258808724832215,
"em_stderr": 0.004951428522573584,
"f1": 0.41863255033557134,
"f1_stderr": 0.004838761301543826
},
"harness|gsm8k|5": {
"acc": 0.12282031842304776,
"acc_stderr": 0.009041108602874675
},
"harness|winogrande|5": {
"acc": 0.7663772691397001,
"acc_stderr": 0.011892194477183524
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o
|
[
"region:us"
] |
2023-10-10T08:49:16+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T03:32:51.454817](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o/blob/main/results_2023-10-28T03-32-51.454817.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.37258808724832215,\n \"em_stderr\": 0.004951428522573584,\n \"f1\": 0.41863255033557134,\n \"f1_stderr\": 0.004838761301543826,\n \"acc\": 0.4445987937813739,\n \"acc_stderr\": 0.010466651540029098\n },\n \"harness|drop|3\": {\n \"em\": 0.37258808724832215,\n \"em_stderr\": 0.004951428522573584,\n \"f1\": 0.41863255033557134,\n \"f1_stderr\": 0.004838761301543826\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12282031842304776,\n \"acc_stderr\": 0.009041108602874675\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7663772691397001,\n \"acc_stderr\": 0.011892194477183524\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T03_32_51.454817", "path": ["**/details_harness|drop|3_2023-10-28T03-32-51.454817.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T03-32-51.454817.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T03_32_51.454817", "path": ["**/details_harness|gsm8k|5_2023-10-28T03-32-51.454817.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T03-32-51.454817.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hellaswag|10_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-48-52.263585.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-48-52.263585.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-48-52.263585.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T09-48-52.263585.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-48-52.263585.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T09-48-52.263585.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T03_32_51.454817", "path": ["**/details_harness|winogrande|5_2023-10-28T03-32-51.454817.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T03-32-51.454817.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T09_48_52.263585", "path": ["results_2023-10-10T09-48-52.263585.parquet"]}, {"split": "2023_10_28T03_32_51.454817", "path": ["results_2023-10-28T03-32-51.454817.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T03-32-51.454817.parquet"]}]}]}
|
2023-10-28T02:33:04+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-28T03:32:51.454817 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T03:32:51.454817(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T03:32:51.454817(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
38,
31,
186,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T03:32:51.454817(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6c2f6928424a4105ea48e83aae28c8298373c14c
|
# Dataset Card for "pooling_net_embeddings_dim_16_masked_dataset_1p"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
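Until the card is filled in, this row's metadata already describes the schema; a minimal loading sketch based only on that metadata (a single "train" split of 673 examples, with a float32 sequence feature "last_hs" and an int64 "label"):

```python
from datasets import load_dataset

# Minimal sketch, assuming only what the row's metadata states:
# features "last_hs" (sequence of float32) and "label" (int64).
ds = load_dataset(
    "johannes-garstenauer/pooling_net_embeddings_dim_16_masked_dataset_1p",
    split="train",
)
print(ds.features)
```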
|
johannes-garstenauer/pooling_net_embeddings_dim_16_masked_dataset_1p
|
[
"region:us"
] |
2023-10-10T08:50:52+00:00
|
{"dataset_info": {"features": [{"name": "last_hs", "sequence": "float32"}, {"name": "label", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 51148, "num_examples": 673}], "download_size": 61004, "dataset_size": 51148}}
|
2023-10-10T08:50:55+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "pooling_net_embeddings_dim_16_masked_dataset_1p"
More Information needed
|
[
"# Dataset Card for \"pooling_net_embeddings_dim_16_masked_dataset_1p\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"pooling_net_embeddings_dim_16_masked_dataset_1p\"\n\nMore Information needed"
] |
[
6,
30
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"pooling_net_embeddings_dim_16_masked_dataset_1p\"\n\nMore Information needed"
] |
4f70e1d7a8bf6b85b6c64455b1afc6f76af6e33b
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public",
"harness_winogrande_5",
split="train")
```
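If you are unsure which configuration to request, the standard `datasets` helper below lists them; the names shown in the comment (`harness_drop_3`, `harness_gsm8k_5`, `harness_winogrande_5`, `results`) are taken from this repository's metadata:

```python
from datasets import get_dataset_config_names

# Enumerate the available configurations of this details repository.
configs = get_dataset_config_names(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public"
)
print(configs)  # e.g. ['harness_drop_3', 'harness_gsm8k_5', 'harness_winogrande_5', 'results']
```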
## Latest results
These are the [latest results from run 2023-11-06T15:43:11.163444](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public/blob/main/results_2023-11-06T15-43-11.163444.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.37458053691275167,
"em_stderr": 0.004956760684602152,
"f1": 0.41704173657718185,
"f1_stderr": 0.004847488019820457,
"acc": 0.45805311598499976,
"acc_stderr": 0.010642754511101384
},
"harness|drop|3": {
"em": 0.37458053691275167,
"em_stderr": 0.004956760684602152,
"f1": 0.41704173657718185,
"f1_stderr": 0.004847488019820457
},
"harness|gsm8k|5": {
"acc": 0.14025777103866566,
"acc_stderr": 0.009565108281428666
},
"harness|winogrande|5": {
"acc": 0.7758484609313339,
"acc_stderr": 0.011720400740774104
}
}
```
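To retrieve these aggregated numbers programmatically rather than copying them from the JSON above, you can load the `results` configuration; per the repository metadata, the `latest` split alias resolves to the 2023-11-06 run (a sketch mirroring the loading pattern shown earlier):

```python
from datasets import load_dataset

# "latest" is an alias for the most recent timestamped results split.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public",
    "results",
    split="latest",
)
print(results[0])
```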
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
|
[
"region:us"
] |
2023-10-10T08:56:03+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-06T15:43:11.163444](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_public/blob/main/results_2023-11-06T15-43-11.163444.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.37458053691275167,\n \"em_stderr\": 0.004956760684602152,\n \"f1\": 0.41704173657718185,\n \"f1_stderr\": 0.004847488019820457,\n \"acc\": 0.45805311598499976,\n \"acc_stderr\": 0.010642754511101384\n },\n \"harness|drop|3\": {\n \"em\": 0.37458053691275167,\n \"em_stderr\": 0.004956760684602152,\n \"f1\": 0.41704173657718185,\n \"f1_stderr\": 0.004847488019820457\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.14025777103866566,\n \"acc_stderr\": 0.009565108281428666\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7758484609313339,\n \"acc_stderr\": 0.011720400740774104\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_05T07_54_10.919689", "path": ["**/details_harness|drop|3_2023-11-05T07-54-10.919689.parquet"]}, {"split": "2023_11_06T15_43_11.163444", "path": ["**/details_harness|drop|3_2023-11-06T15-43-11.163444.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-06T15-43-11.163444.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_05T07_54_10.919689", "path": ["**/details_harness|gsm8k|5_2023-11-05T07-54-10.919689.parquet"]}, {"split": "2023_11_06T15_43_11.163444", "path": ["**/details_harness|gsm8k|5_2023-11-06T15-43-11.163444.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-06T15-43-11.163444.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_05T07_54_10.919689", "path": ["**/details_harness|winogrande|5_2023-11-05T07-54-10.919689.parquet"]}, {"split": "2023_11_06T15_43_11.163444", "path": 
["**/details_harness|winogrande|5_2023-11-06T15-43-11.163444.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-06T15-43-11.163444.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_05T07_54_10.919689", "path": ["results_2023-11-05T07-54-10.919689.parquet"]}, {"split": "2023_11_06T15_43_11.163444", "path": ["results_2023-11-06T15-43-11.163444.parquet"]}, {"split": "latest", "path": ["results_2023-11-06T15-43-11.163444.parquet"]}]}]}
|
2023-12-01T14:26:11+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-06T15:43:11.163444 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T15:43:11.163444(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-06T15:43:11.163444(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
39,
31,
188,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-06T15:43:11.163444(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b145837b17dc8907ca59b9240254bcc56845ad6d
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-gate_up_down",
"harness_winogrande_5",
split="train")
```
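Each configuration also exposes timestamped splits directly, so you can pin an analysis to a specific run instead of the moving `latest` alias; the split name below is the 2023-10-29 run recorded in this repository's metadata:

```python
from datasets import load_dataset

# Pin the DROP details to the 2023-10-29 evaluation run.
data = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-gate_up_down",
    "harness_drop_3",
    split="2023_10_29T07_06_26.845938",
)
```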
## Latest results
These are the [latest results from run 2023-10-29T07:06:26.845938](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-gate_up_down/blob/main/results_2023-10-29T07-06-26.845938.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.20836828859060402,
"em_stderr": 0.004159269440162747,
"f1": 0.2507906879194633,
"f1_stderr": 0.004162090421371717,
"acc": 0.43807672814244847,
"acc_stderr": 0.01035305451841861
},
"harness|drop|3": {
"em": 0.20836828859060402,
"em_stderr": 0.004159269440162747,
"f1": 0.2507906879194633,
"f1_stderr": 0.004162090421371717
},
"harness|gsm8k|5": {
"acc": 0.11372251705837756,
"acc_stderr": 0.008744810131034056
},
"harness|winogrande|5": {
"acc": 0.7624309392265194,
"acc_stderr": 0.011961298905803166
}
}
```
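As with the per-task details, the aggregated metrics above can be loaded from the `results` configuration (a sketch; per the repository metadata, `latest` points to the 2023-10-29 run):

```python
from datasets import load_dataset

# Load the aggregated results for the latest run of this repository.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-gate_up_down",
    "results",
    split="latest",
)
print(results[0])
```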
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-gate_up_down
|
[
"region:us"
] |
2023-10-10T09:01:42+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-29T07:06:26.845938](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-gate_up_down/blob/main/results_2023-10-29T07-06-26.845938.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.20836828859060402,\n \"em_stderr\": 0.004159269440162747,\n \"f1\": 0.2507906879194633,\n \"f1_stderr\": 0.004162090421371717,\n \"acc\": 0.43807672814244847,\n \"acc_stderr\": 0.01035305451841861\n },\n \"harness|drop|3\": {\n \"em\": 0.20836828859060402,\n \"em_stderr\": 0.004159269440162747,\n \"f1\": 0.2507906879194633,\n \"f1_stderr\": 0.004162090421371717\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11372251705837756,\n \"acc_stderr\": 0.008744810131034056\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7624309392265194,\n \"acc_stderr\": 0.011961298905803166\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|arc:challenge|25_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_29T07_06_26.845938", "path": ["**/details_harness|drop|3_2023-10-29T07-06-26.845938.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-29T07-06-26.845938.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_29T07_06_26.845938", "path": ["**/details_harness|gsm8k|5_2023-10-29T07-06-26.845938.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-29T07-06-26.845938.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hellaswag|10_2023-10-10T10-01-17.783068.parquet"]}, {"split": 
"latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-01-17.783068.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-01-17.783068.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-01-17.783068.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T10-01-17.783068.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T10-01-17.783068.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T10-01-17.783068.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_29T07_06_26.845938", "path": ["**/details_harness|winogrande|5_2023-10-29T07-06-26.845938.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-29T07-06-26.845938.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T10_01_17.783068", "path": ["results_2023-10-10T10-01-17.783068.parquet"]}, {"split": "2023_10_29T07_06_26.845938", "path": ["results_2023-10-29T07-06-26.845938.parquet"]}, {"split": "latest", "path": ["results_2023-10-29T07-06-26.845938.parquet"]}]}]}
|
2023-10-29T07:06:39+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-29T07:06:26.845938 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T07:06:26.845938(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-29T07:06:26.845938(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
37,
31,
185,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-29T07:06:26.845938(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
a85c4a0d75e0fe799eb0625d149ef0685e521873
|
# Dataset Card for Evaluation run of elinas/chronos007-70b
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/elinas/chronos007-70b
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [elinas/chronos007-70b](https://huggingface.co/elinas/chronos007-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
# Each evaluated task has its own configuration; "train" always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_elinas__chronos007-70b_public",
	"harness_winogrande_5",
	split="train")
```
## Latest results
These are the [latest results from run 2023-11-08T16:36:09.949809](https://huggingface.co/datasets/open-llm-leaderboard/details_elinas__chronos007-70b_public/blob/main/results_2023-11-08T16-36-09.949809.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.08756291946308725,
"em_stderr": 0.002894684468980241,
"f1": 0.1552086828859053,
"f1_stderr": 0.0030733731115224513,
"acc": 0.6242477589094606,
"acc_stderr": 0.012180910628722973
},
"harness|drop|3": {
"em": 0.08756291946308725,
"em_stderr": 0.002894684468980241,
"f1": 0.1552086828859053,
"f1_stderr": 0.0030733731115224513
},
"harness|gsm8k|5": {
"acc": 0.42608036391205456,
"acc_stderr": 0.013621144396086709
},
"harness|winogrande|5": {
"acc": 0.8224151539068666,
"acc_stderr": 0.010740676861359238
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_elinas__chronos007-70b
|
[
"region:us"
] |
2023-10-10T09:09:14+00:00
|
{"pretty_name": "Evaluation run of elinas/chronos007-70b", "dataset_summary": "Dataset automatically created during the evaluation run of model [elinas/chronos007-70b](https://huggingface.co/elinas/chronos007-70b) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_elinas__chronos007-70b_public\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-11-08T16:36:09.949809](https://huggingface.co/datasets/open-llm-leaderboard/details_elinas__chronos007-70b_public/blob/main/results_2023-11-08T16-36-09.949809.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.08756291946308725,\n \"em_stderr\": 0.002894684468980241,\n \"f1\": 0.1552086828859053,\n \"f1_stderr\": 0.0030733731115224513,\n \"acc\": 0.6242477589094606,\n \"acc_stderr\": 0.012180910628722973\n },\n \"harness|drop|3\": {\n \"em\": 0.08756291946308725,\n \"em_stderr\": 0.002894684468980241,\n \"f1\": 0.1552086828859053,\n \"f1_stderr\": 0.0030733731115224513\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.42608036391205456,\n \"acc_stderr\": 0.013621144396086709\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8224151539068666,\n \"acc_stderr\": 0.010740676861359238\n }\n}\n```", "repo_url": "https://huggingface.co/elinas/chronos007-70b", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_drop_3", "data_files": [{"split": "2023_11_08T16_36_09.949809", "path": ["**/details_harness|drop|3_2023-11-08T16-36-09.949809.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-11-08T16-36-09.949809.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_11_08T16_36_09.949809", "path": ["**/details_harness|gsm8k|5_2023-11-08T16-36-09.949809.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-11-08T16-36-09.949809.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_11_08T16_36_09.949809", "path": ["**/details_harness|winogrande|5_2023-11-08T16-36-09.949809.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-11-08T16-36-09.949809.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_11_08T16_36_09.949809", "path": ["results_2023-11-08T16-36-09.949809.parquet"]}, {"split": "latest", "path": ["results_2023-11-08T16-36-09.949809.parquet"]}]}]}
|
2023-12-01T14:51:05+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of elinas/chronos007-70b
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model elinas/chronos007-70b on the Open LLM Leaderboard.
The dataset is composed of 3 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-11-08T16:36:09.949809 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of elinas/chronos007-70b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model elinas/chronos007-70b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T16:36:09.949809(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of elinas/chronos007-70b",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model elinas/chronos007-70b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-11-08T16:36:09.949809(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
17,
31,
166,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of elinas/chronos007-70b## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model elinas/chronos007-70b on the Open LLM Leaderboard.\n\nThe dataset is composed of 3 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-11-08T16:36:09.949809(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
cf2893e676f3c4cbe0af2727c15566d02d88b31b
|
# Dataset Card for "i-SAID_custom"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
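In the meantime, a minimal loading sketch based on the repo metadata (features `image` and `label`, with `train` and `validation` splits); reading `label` as a per-pixel annotation mask is an assumption:

```python
from datasets import load_dataset

# Split names taken from the dataset's metadata
ds = load_dataset("giuseppemartino/i-SAID_custom", split="validation")

sample = ds[0]
img = sample["image"]   # PIL image (input)
mask = sample["label"]  # PIL image (assumed segmentation mask)
print(img.size, mask.size)
```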
|
giuseppemartino/i-SAID_custom
|
[
"region:us"
] |
2023-10-10T09:11:38+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": "image"}], "splits": [{"name": "train", "num_bytes": 6362576122.0, "num_examples": 840}, {"name": "validation", "num_bytes": 905977299.0, "num_examples": 99}], "download_size": 7262651438, "dataset_size": 7268553421.0}}
|
2023-10-10T14:43:47+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "i-SAID_custom"
More Information needed
|
[
"# Dataset Card for \"i-SAID_custom\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"i-SAID_custom\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"i-SAID_custom\"\n\nMore Information needed"
] |
fb5790f9f3667784741c59184a912cfdf488cbf5
|
# Dataset Card for "10kvnbook"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
tinhpx2911/vietnamese_book_10k
|
[
"region:us"
] |
2023-10-10T09:17:13+00:00
|
{"dataset_info": {"features": [{"name": "text", "dtype": "string"}, {"name": "name", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1607495469, "num_examples": 9961}], "download_size": 844824154, "dataset_size": 1607495469}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T09:19:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "10kvnbook"
More Information needed
|
[
"# Dataset Card for \"10kvnbook\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"10kvnbook\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"10kvnbook\"\n\nMore Information needed"
] |
ba9075bf4ba6d682b52684d20752f6479aa567cd
|
# Dataset Card for "French_English_2"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
ismailiismail/French_English_2
|
[
"region:us"
] |
2023-10-10T09:18:25+00:00
|
{"dataset_info": {"features": [{"name": "french", "dtype": "string"}, {"name": "english", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 914954, "num_examples": 2992}], "download_size": 352011, "dataset_size": 914954}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-12T07:17:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "French_English_2"
More Information needed
|
[
"# Dataset Card for \"French_English_2\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"French_English_2\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"French_English_2\"\n\nMore Information needed"
] |
05a7a43f1ccff62d553350638313ca456ce5f887
|
# Dataset Card for "c_x86_exebench_json"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
zhangshuoming/c_x86_exebench_json
|
[
"region:us"
] |
2023-10-10T09:19:10+00:00
|
{"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 1909559448, "num_examples": 868385}], "download_size": 515282168, "dataset_size": 1909559448}}
|
2023-10-11T05:55:07+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "c_x86_exebench_json"
More Information needed
|
[
"# Dataset Card for \"c_x86_exebench_json\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"c_x86_exebench_json\"\n\nMore Information needed"
] |
[
6,
21
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"c_x86_exebench_json\"\n\nMore Information needed"
] |
4a25686553cee65bcb6755ef337643841b5e712c
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 runs. Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
# Each evaluated task has its own configuration; "train" always points to the latest run.
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down",
	"harness_winogrande_5",
	split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T21:40:15.944875](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down/blob/main/results_2023-10-25T21-40-15.944875.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.07697147651006711,
"em_stderr": 0.002729682408788614,
"f1": 0.12191170302013389,
"f1_stderr": 0.0028589398116221384,
"acc": 0.44546584943629414,
"acc_stderr": 0.01035635936441261
},
"harness|drop|3": {
"em": 0.07697147651006711,
"em_stderr": 0.002729682408788614,
"f1": 0.12191170302013389,
"f1_stderr": 0.0028589398116221384
},
"harness|gsm8k|5": {
"acc": 0.11902956785443518,
"acc_stderr": 0.00891970291116163
},
"harness|winogrande|5": {
"acc": 0.7719021310181531,
"acc_stderr": 0.011793015817663592
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down
|
[
"region:us"
] |
2023-10-10T09:21:06+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T21:40:15.944875](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down/blob/main/results_2023-10-25T21-40-15.944875.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.07697147651006711,\n \"em_stderr\": 0.002729682408788614,\n \"f1\": 0.12191170302013389,\n \"f1_stderr\": 0.0028589398116221384,\n \"acc\": 0.44546584943629414,\n \"acc_stderr\": 0.01035635936441261\n },\n \"harness|drop|3\": {\n \"em\": 0.07697147651006711,\n \"em_stderr\": 0.002729682408788614,\n \"f1\": 0.12191170302013389,\n \"f1_stderr\": 0.0028589398116221384\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.11902956785443518,\n \"acc_stderr\": 0.00891970291116163\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7719021310181531,\n \"acc_stderr\": 0.011793015817663592\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|arc:challenge|25_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T21_40_15.944875", "path": ["**/details_harness|drop|3_2023-10-25T21-40-15.944875.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T21-40-15.944875.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T21_40_15.944875", "path": ["**/details_harness|gsm8k|5_2023-10-25T21-40-15.944875.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T21-40-15.944875.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": 
["**/details_harness|hellaswag|10_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-20-42.158103.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-20-42.158103.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-20-42.158103.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-10-10T10-20-42.158103.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-20-42.158103.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T10-20-42.158103.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T10-20-42.158103.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T21_40_15.944875", "path": ["**/details_harness|winogrande|5_2023-10-25T21-40-15.944875.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T21-40-15.944875.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T10_20_42.158103", "path": ["results_2023-10-10T10-20-42.158103.parquet"]}, {"split": "2023_10_25T21_40_15.944875", "path": ["results_2023-10-25T21-40-15.944875.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T21-40-15.944875.parquet"]}]}]}
|
2023-10-25T20:40:28+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-25T21:40:15.944875(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T21:40:15.944875(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T21:40:15.944875(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
45,
31,
193,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-q_k_v_o_gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T21:40:15.944875(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7a63d09d281a3705779a2f44a37e0e0843b61407
|
# GENERATED USA Passports Segmentation
The dataset contains a collection of images representing **GENERATED USA Passports**. Each passport image is segmented into different zones, including the **passport zone, photo, name, surname, date of birth, sex, nationality, passport number, and MRZ (Machine Readable Zone)**.
The dataset can be utilized for *computer vision, object detection, data extraction and machine learning models*.
Generated passports can assist in conducting research without accessing or compromising real user data that is often sensitive and subject to privacy regulations. **Synthetic data generation** allows researchers to *develop and refine models using simulated passport data without risking privacy leaks*.

### The dataset is solely for informational or educational purposes and should not be used for any fraudulent or deceptive activities.
# Get the dataset
### This is just an example of the data
Leave a request on [**https://trainingdata.pro/data-market**](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-passports-segmentation) to discuss your requirements, learn about the price and buy the dataset.
# Dataset structure
- **images** - contains of generated images of passports
- **labels** - includes segmentation masks created for the original images
- **annotations.xml** - contains coordinates of the polygons, created for the original photo
# Data Format
Each image from `images` folder is accompanied by an XML-annotation in the `annotations.xml` file indicating the coordinates of the polygons and labels . For each point, the x and y coordinates are provided.
### Сlasses:
- **passport**: passport zone,
- **photo**: photo of the person,
- **number**: number of the passport,
- **name**: name of the person,
- **surname**: surname of the person,
- **date_of_birth**: date of birth of the person,
- **nationality**: nationality of the person,
- **sex**: sex of the person,
- **mrz**: mrz in the passport,
- **other**: other text in the passport
# Example of XML file structure

# GENERATED USA Passports Segmentation might be made in accordance with your requirements.
## **[TrainingData](https://trainingdata.pro/data-market?utm_source=huggingface&utm_medium=cpc&utm_campaign=generated-passports-segmentation)** provides high-quality data annotation tailored to your needs
More datasets in TrainingData's Kaggle account: **https://www.kaggle.com/trainingdatapro/datasets**
TrainingData's GitHub: **https://github.com/Trainingdata-datamarket/TrainingData_All_datasets**
|
TrainingDataPro/generated-passports-segmentation
|
[
"task_categories:image-segmentation",
"language:en",
"license:cc-by-nc-nd-4.0",
"finance",
"legal",
"code",
"region:us"
] |
2023-10-10T09:21:17+00:00
|
{"language": ["en"], "license": "cc-by-nc-nd-4.0", "task_categories": ["image-segmentation"], "tags": ["finance", "legal", "code"], "dataset_info": {"features": [{"name": "id", "dtype": "uint16"}, {"name": "image", "dtype": "image"}, {"name": "mask", "dtype": "image"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "shapes", "sequence": [{"name": "type", "sequence": "string"}, {"name": "points", "sequence": {"sequence": "float32"}}, {"name": "rotation", "dtype": "int32"}, {"name": "occluded", "dtype": "int32"}, {"name": "z_order", "dtype": "int32"}]}], "splits": [{"name": "train", "num_bytes": 61160716.0, "num_examples": 22}], "download_size": 60972963, "dataset_size": 61160716.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-11-08T14:53:49+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-image-segmentation #language-English #license-cc-by-nc-nd-4.0 #finance #legal #code #region-us
|
# GENERATED USA Passports Segmentation
The dataset contains a collection of images representing GENERATED USA Passports. Each passport image is segmented into different zones, including the passport zone, photo, name, surname, date of birth, sex, nationality, passport number, and MRZ (Machine Readable Zone).
The dataset can be utilized for *computer vision, object detection, data extraction and machine learning models*.
Generated passports can assist in conducting research without accessing or compromising real user data that is often sensitive and subject to privacy regulations. Synthetic data generation allows researchers to *develop and refine models using simulated passport data without risking privacy leaks*.
.\n\nThe dataset can be utilized for *computer vision, object detection, data extraction and machine learning models*.\n\nGenerated passports can assist in conducting research without accessing or compromising real user data that is often sensitive and subject to privacy regulations. Synthetic data generation allows researchers to *develop and refine models using simulated passport data without risking privacy leaks*.\n\n.\n\nThe dataset can be utilized for *computer vision, object detection, data extraction and machine learning models*.\n\nGenerated passports can assist in conducting research without accessing or compromising real user data that is often sensitive and subject to privacy regulations. Synthetic data generation allows researchers to *develop and refine models using simulated passport data without risking privacy leaks*.\n\n.\n\nThe dataset can be utilized for *computer vision, object detection, data extraction and machine learning models*.\n\nGenerated passports can assist in conducting research without accessing or compromising real user data that is often sensitive and subject to privacy regulations. Synthetic data generation allows researchers to *develop and refine models using simulated passport data without risking privacy leaks*.\n\n on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-q_k_v_o",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-26T00:41:33.977337](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-q_k_v_o/blob/main/results_2023-10-26T00-41-33.977337.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.33378775167785235,
"em_stderr": 0.004829266317241522,
"f1": 0.37629928691275216,
"f1_stderr": 0.004755605249653425,
"acc": 0.45257492790991716,
"acc_stderr": 0.010688989200801685
},
"harness|drop|3": {
"em": 0.33378775167785235,
"em_stderr": 0.004829266317241522,
"f1": 0.37629928691275216,
"f1_stderr": 0.004755605249653425
},
"harness|gsm8k|5": {
"acc": 0.1379833206974981,
"acc_stderr": 0.009499777327746827
},
"harness|winogrande|5": {
"acc": 0.7671665351223362,
"acc_stderr": 0.011878201073856544
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-q_k_v_o
|
[
"region:us"
] |
2023-10-10T09:27:28+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-q_k_v_o\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-26T00:41:33.977337](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r16-q_k_v_o/blob/main/results_2023-10-26T00-41-33.977337.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.33378775167785235,\n \"em_stderr\": 0.004829266317241522,\n \"f1\": 0.37629928691275216,\n \"f1_stderr\": 0.004755605249653425,\n \"acc\": 0.45257492790991716,\n \"acc_stderr\": 0.010688989200801685\n },\n \"harness|drop|3\": {\n \"em\": 0.33378775167785235,\n \"em_stderr\": 0.004829266317241522,\n \"f1\": 0.37629928691275216,\n \"f1_stderr\": 0.004755605249653425\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1379833206974981,\n \"acc_stderr\": 0.009499777327746827\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856544\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|arc:challenge|25_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T00_41_33.977337", "path": ["**/details_harness|drop|3_2023-10-26T00-41-33.977337.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T00-41-33.977337.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T00_41_33.977337", "path": ["**/details_harness|gsm8k|5_2023-10-26T00-41-33.977337.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-26T00-41-33.977337.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hellaswag|10_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hellaswag|10_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-27-05.033674.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-27-05.033674.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-27-05.033674.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T10-27-05.033674.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T10-27-05.033674.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T10-27-05.033674.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T00_41_33.977337", "path": ["**/details_harness|winogrande|5_2023-10-26T00-41-33.977337.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T00-41-33.977337.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T10_27_05.033674", "path": ["results_2023-10-10T10-27-05.033674.parquet"]}, {"split": "2023_10_26T00_41_33.977337", "path": ["results_2023-10-26T00-41-33.977337.parquet"]}, {"split": "latest", "path": ["results_2023-10-26T00-41-33.977337.parquet"]}]}]}
|
2023-10-25T23:41:46+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o on the Open LLM Leaderboard.
The dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The "train" split is always pointing to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
## Latest results
These are the latest results from run 2023-10-26T00:41:33.977337(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T00:41:33.977337(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T00:41:33.977337(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
39,
31,
187,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r16-q_k_v_o on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-26T00:41:33.977337(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
881df1e8267710aa9f280719b76808f5620f823b
|
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
|
fmeleard/moody_data
|
[
"task_categories:summarization",
"task_categories:conversational",
"language:fr",
"license:apache-2.0",
"region:us"
] |
2023-10-10T09:34:34+00:00
|
{"language": ["fr"], "license": "apache-2.0", "task_categories": ["summarization", "conversational"]}
|
2023-10-10T09:37:19+00:00
|
[] |
[
"fr"
] |
TAGS
#task_categories-summarization #task_categories-conversational #language-French #license-apache-2.0 #region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
|
[
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
"TAGS\n#task_categories-summarization #task_categories-conversational #language-French #license-apache-2.0 #region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] |
[
40,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] |
[
"passage: TAGS\n#task_categories-summarization #task_categories-conversational #language-French #license-apache-2.0 #region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
06e7905ff89594dc4c16cf4f5d019bd9cba3aca5
|
## Overview
This dataset is a continuation of the airoboros datasets, with the following updates:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __`<mathjson>`__ and __`</mathjson>`__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction.
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
Some of the MathJSON data was adapted from https://huggingface.co/datasets/meta-math/MetaMathQA
## Format
The format is now in ShareGPT format, to better accommodate the OS ecosystem fine-tuning tooling.
## MathJSON
As we all know, math is really difficult for LLMs, especially floating point operations, trig functions, factorials, and several-digit numbers.
Instead of sending millions and millions of examples to the LLM for fine-tuning, I've created a new category `mathjson` which simply presents the solution to the problem as a JSON string that can be evaluated by traditional computation.
The dataset has roughly 4k samples, as a proof of concept. This can be expanded fairly easily, now that the fine-tuned models have a basic understanding of how to represent the problems.
For example:
```
Create a MathJSON solution to the following: Calculate the area of a circle with radius of 17.2456cm. Include your reasoning. [/INST]
```
```
The area (A) of a circle is given by the formula A = πr², where r is the radius of the circle. Since the radius is given in centimeters, we need to convert it to meters first. We know that 1 meter equals 100 centimeters. Therefore, we can represent the radius as: r = 17.2456 cm * 100/1.
Once we have the radius in meters, we can calculate the area of the circle using the formula A = πr².
Solution as MathJSON:
<mathjson>
[
"Multiply",
"Pi",
[
"Power",
17.2456,
2
]
]
</mathjson>
```
Simply extract the JSON string between the mathjson tags, and evaluate it with https://cortexjs.io/compute-engine/ or my ultra-hacky limited-functionality python version of it https://github.com/jondurbin/airoboros/blob/mathjson/airoboros/mathjson.py
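For illustration, here is a minimal, self-contained Python sketch of that extraction step. The tiny evaluator is an assumption covering only the operators seen in the example above (plus a couple of obvious ones), not the full compute-engine API:

```python
import json
import math
import re

def extract_mathjson(text):
    """Pull the JSON payload out of the <mathjson>...</mathjson> tags."""
    match = re.search(r"<mathjson>(.*?)</mathjson>", text, re.DOTALL)
    return json.loads(match.group(1)) if match else None

def evaluate(expr):
    """Recursively evaluate a MathJSON expression (small operator subset)."""
    if isinstance(expr, (int, float)):
        return expr
    if isinstance(expr, str):
        if expr == "Pi":
            return math.pi
        raise ValueError(f"unsupported symbol: {expr}")
    op, *args = expr
    args = [evaluate(a) for a in args]
    if op == "Multiply":
        return math.prod(args)
    if op == "Add":
        return sum(args)
    if op == "Power":
        return args[0] ** args[1]
    if op == "Divide":
        return args[0] / args[1]
    raise ValueError(f"unsupported operator: {op}")

response = 'Solution as MathJSON:\n<mathjson>\n["Multiply", "Pi", ["Power", 17.2456, 2]]\n</mathjson>'
print(evaluate(extract_mathjson(response)))  # ~934.3, i.e. pi * 17.2456**2
```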
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
```
{
"conversations": [
{
"value": "You are a helpful, unbiased, uncensored assistant.",
"from": "system"
},
{
"value": "I'm going to set some rules for our conversation. Please respond with \"OK\" to each rule.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "You can only respond using animal-related emojis.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Precede all responses with \"ZOOZOO:\"",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Include at least one bird emoji in your responses.",
"from": "human"
},
{
"value": "OK",
"from": "gpt"
},
{
"value": "Describe a trip to a safari park.",
"from": "human"
},
{
"value": "ZOOZOO: \ud83e\udd81\ud83d\udc2f\ud83e\udd93\ud83e\udd92\ud83d\udc18\ud83d\udc0d\ud83c\udf33\ud83c\udf32\u2600\ufe0f\ud83d\ude90\ud83e\udd85\ud83d\udcf8\ud83e\udd29",
"from": "gpt"
}
]
}
```
|
jondurbin/airoboros-3.1
|
[
"license:cc-by-4.0",
"region:us"
] |
2023-10-10T10:01:33+00:00
|
{"license": "cc-by-4.0"}
|
2023-12-26T18:09:12+00:00
|
[] |
[] |
TAGS
#license-cc-by-4.0 #region-us
|
## Overview
This dataset is a continuation of the airoboros datasets, with the following updates:
* More MathJSON, now ~17k items - math questions, prefixed with __"Create a MathJSON solution to the following:"__, which then outputs a JSON between __'<mathjson>'__ and __'</mathjson>'__ tags, which can be parsed and passed to a deterministic library to perform calculations.
* Log information extraction.
* Anonymization, e.g. removing names, IP addresses, and/or dates from text.
* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.
* Multi-step instructions with acknowledgement (see below)
Some of the MathJSON data was adapted from URL
## Format
The format is now in ShareGPT format, to better accommodate the OS ecosystem fine-tuning tooling.
## MathJSON
As we all know, math is really difficult for LLMs, especially floating point operations, trig functions, factorials, and several-digit numbers.
Instead of sending millions and millions of examples to the LLM for fine-tuning, I've created a new category 'mathjson' which simply presents the solution to the problem as a JSON string that can be evaluated by traditional computation.
The dataset has roughly 4k samples, as a proof of concept. This can be expanded fairly easily, now that the fine-tuned models have a basic understanding of how to represent the problems.
For example:
Simply extract the JSON string between the mathjson tags, and evaluate it with URL or my ultra-hacky limited-functionality python version of it URL
## Multi-step instructions with acknowledgement
I don't know how useful this is, really, but I thought I'd add it just in case. Example:
|
[
"## Overview\n\nThis dataset is a continuation of the airoboros datasets, with the following updates:\n* More MathJSON, now ~17k items - math questions, prefixed with __\"Create a MathJSON solution to the following:\"__, which then outputs a JSON between __'<mathjson>'__ and __'</mathjson>'__ tags, which can be parsed and passed to a deterministic library to perform calculations.\n* Log information extraction.\n* Anonymization, e.g. removing names, IP addresses, and/or dates from text.\n* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.\n* Multi-step instructions with acknowledgement (see below)\n\nSome of the MathJSON data was adapted from URL",
"## Format\n\nThe format is now in ShareGPT format, to better accomodate the OS ecosystem fine-tuning tooling.",
"## MathJSON\n\nAs we all know, math is really difficult for LLMs, especially floating point operations, trig functions, factorials, and several-digit numbers.\n\nInstead of sending millions and millions of examples to the LLM for fine-tuning, I've created a new category 'mathjson' which simply presents the solution to the problem as a JSON string that can be evaluated by traditional computation.\n\nThe dataset has roughly 4k samples, as a proof of concept. This can be expanded fairly easily, now that the fine-tuned models have a basic understanding of how to represent the problems.\n\nFor example:\n\n\n\n\n\nSimply extract the JSON string between the mathjson tags, and evaluate it with URL or my ultra-hacky limited-functionality python version of it URL",
"## Multi-step instructions with acknowledgement\n\nI don't know how useful this is, really, but I thought I'd add it just in case. Example:"
] |
[
"TAGS\n#license-cc-by-4.0 #region-us \n",
"## Overview\n\nThis dataset is a continuation of the airoboros datasets, with the following updates:\n* More MathJSON, now ~17k items - math questions, prefixed with __\"Create a MathJSON solution to the following:\"__, which then outputs a JSON between __'<mathjson>'__ and __'</mathjson>'__ tags, which can be parsed and passed to a deterministic library to perform calculations.\n* Log information extraction.\n* Anonymization, e.g. removing names, IP addresses, and/or dates from text.\n* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.\n* Multi-step instructions with acknowledgement (see below)\n\nSome of the MathJSON data was adapted from URL",
"## Format\n\nThe format is now in ShareGPT format, to better accomodate the OS ecosystem fine-tuning tooling.",
"## MathJSON\n\nAs we all know, math is really difficult for LLMs, especially floating point operations, trig functions, factorials, and several-digit numbers.\n\nInstead of sending millions and millions of examples to the LLM for fine-tuning, I've created a new category 'mathjson' which simply presents the solution to the problem as a JSON string that can be evaluated by traditional computation.\n\nThe dataset has roughly 4k samples, as a proof of concept. This can be expanded fairly easily, now that the fine-tuned models have a basic understanding of how to represent the problems.\n\nFor example:\n\n\n\n\n\nSimply extract the JSON string between the mathjson tags, and evaluate it with URL or my ultra-hacky limited-functionality python version of it URL",
"## Multi-step instructions with acknowledgement\n\nI don't know how useful this is, really, but I thought I'd add it just in case. Example:"
] |
[
15,
211,
28,
180,
35
] |
[
"passage: TAGS\n#license-cc-by-4.0 #region-us \n## Overview\n\nThis dataset is a continuation of the airoboros datasets, with the following updates:\n* More MathJSON, now ~17k items - math questions, prefixed with __\"Create a MathJSON solution to the following:\"__, which then outputs a JSON between __'<mathjson>'__ and __'</mathjson>'__ tags, which can be parsed and passed to a deterministic library to perform calculations.\n* Log information extraction.\n* Anonymization, e.g. removing names, IP addresses, and/or dates from text.\n* Chat introspection - multi-turn chats that have random questions injected asking about the character's descriptions, the setting of the chat, or things previously said, to make sure the model uses the system prompt and or chat history.\n* Multi-step instructions with acknowledgement (see below)\n\nSome of the MathJSON data was adapted from URL## Format\n\nThe format is now in ShareGPT format, to better accomodate the OS ecosystem fine-tuning tooling.## MathJSON\n\nAs we all know, math is really difficult for LLMs, especially floating point operations, trig functions, factorials, and several-digit numbers.\n\nInstead of sending millions and millions of examples to the LLM for fine-tuning, I've created a new category 'mathjson' which simply presents the solution to the problem as a JSON string that can be evaluated by traditional computation.\n\nThe dataset has roughly 4k samples, as a proof of concept. This can be expanded fairly easily, now that the fine-tuned models have a basic understanding of how to represent the problems.\n\nFor example:\n\n\n\n\n\nSimply extract the JSON string between the mathjson tags, and evaluate it with URL or my ultra-hacky limited-functionality python version of it URL## Multi-step instructions with acknowledgement\n\nI don't know how useful this is, really, but I thought I'd add it just in case. Example:"
] |
df6d6d348d7f8a0b1c98b56f4443977b60fae0f5
|
# IndoMMLU
<p align="center"> <img src="https://raw.githubusercontent.com/fajri91/eval_picts/master/IndoMMLU-Bar.png" style="width: 100%;" id="title-icon">
</p>
<p align="center"> <a href="http://www.fajrikoto.com" target="_blank">Fajri Koto</a>, <a href="https://www.linkedin.com/in/nuaisyah/" target="_blank">Nurul Aisyah</a>, <a href="https://haonan-li.github.io/" target="_blank">Haonan Li</a>, <a href="https://people.eng.unimelb.edu.au/tbaldwin/" target="_blank">Timothy Baldwin</a> </p>
<h4 align="center">
<p align="center" style="display: flex; flex-direction: row; justify-content: center; align-items: center">
📄 <a href="https://arxiv.org/abs/2310.04928" target="_blank" style="margin-right: 15px; margin-left: 10px">Paper</a> •
🏆 <a href="https://github.com/fajri91/IndoMMLU/blob/main/README_EN.md#evaluation" target="_blank" style="margin-left: 10px">Leaderboard</a> •
🤗 <a href="https://huggingface.co/datasets/indolem/indommlu" target="_blank" style="margin-left: 10px">Dataset</a>
</p>
</h4>
## Introduction
We introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,
which consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,
we obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency
in the Indonesian language and knowledge of nine local languages and cultures in Indonesia.
<p align="left"> <img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-dist.png?raw=true" style="width: 500px;" id="title-icon"> </p>
## Subjects
| Level | Subjects |
|-----------|------------------------------------|
| SD (Primary School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Dayak Ngaju, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMP (Junior High School) | Science, Social science, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Minangkabau culture, Art, Sports, Islam religion, Christian religion, Hindu religion |
| SMA (Senior High School) | Physics, Chemistry, Biology, Geography, Sociology, Economics, History, Civics, Indonesian Language, Balinese, Makassarese, Banjarese, Lampungic, Madurese, Sundanese, Javanese, Art, Sports, Islam religion, Christian religion, Hindu religion |
| University Entrance Test | Chemistry, Biology, Geography, Sociology, Economics, History, Indonesian Language |
We categorize the collected questions into different subject areas, including: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.
## Examples
These questions are written in Indonesian. For local language subjects, some are written in the local languages. The English version is for illustrative purposes only.
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/min_example.png?raw=true" style="width: 400px;" id="title-icon">
</p>
## Evaluation
We evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings. This includes [GPT-3.5 (ChatGPT)](https://chat.openai.com/), [XGLM](https://arxiv.org/abs/2112.10668), [Falcon](https://falconllm.tii.ae/), [BLOOMZ](https://huggingface.co/bigscience/bloomz), [mT0](https://huggingface.co/bigscience/mt0-xxl), [LLaMA](https://arxiv.org/abs/2302.13971), and [Bactrian-X](https://github.com/mbzuai-nlp/bactrian-x). Prior to the question and multiple-choice options, we add a simple prompt in the Indonesian language:
```
Ini adalah soal [subject] untuk [level]. Pilihlah salah satu jawaban yang dianggap benar!
English Translation: This is a [subject] question for [level]. Please choose the correct answer!
```
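As a concrete sketch of how the full zero-shot input might be assembled, here is a minimal Python helper. The field names (`subject`, `level`, `question`, `choices`) and the trailing "Jawaban:" cue are illustrative assumptions, not the dataset's actual column names or official prompt template:

```python
# Indonesian zero-shot prompt from the paper, with placeholders filled in.
PROMPT = ("Ini adalah soal {subject} untuk {level}. "
          "Pilihlah salah satu jawaban yang dianggap benar!")

def build_input(subject, level, question, choices):
    header = PROMPT.format(subject=subject, level=level)
    # Label up to five options A..E, one per line.
    options = "\n".join(f"{label}. {text}" for label, text in zip("ABCDE", choices))
    return f"{header}\n\n{question}\n{options}\nJawaban:"

print(build_input("Biologi", "SMA",
                  "Organel yang berperan dalam respirasi sel adalah ...",
                  ["Ribosom", "Mitokondria", "Lisosom", "Nukleus"]))
```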
#### Zero-shot Evaluation
| Model (#param) | STEM | Social Science | Humanities | Indonesian Lang. | Local L. Culture | Average |
|---------------------|------|----------|-------------|---------|----------|---------|
| Random | 21.9 | 23.4 | 23.5 | 24.4 | 26.6 | 24.4 |
| [GPT-3.5 (175B)](https://chat.openai.com/) | **54.3** | **62.5** | **64.0** | **62.2** | 39.3 | **53.2** |
| [XGLM (564M)](https://huggingface.co/facebook/xglm-564M) | 22.1 | 23.0 | 25.6 | 25.6 | 27.5 | 25.2 |
| [XGLM (1.7B)](https://huggingface.co/facebook/xglm-1.7B) | 20.9 | 23.0 | 24.6 | 24.8 | 26.6 | 24.4 |
| [XGLM (2.9B)](https://huggingface.co/facebook/xglm-2.9B) | 22.9 | 23.2 | 25.4 | 26.3 | 27.2 | 25.2 |
| [XGLM (4.5B)](https://huggingface.co/facebook/xglm-4.5B) | 21.8 | 23.1 | 25.6 | 25.8 | 27.1 | 25.0 |
| [XGLM (7.5B)](https://huggingface.co/facebook/xglm-7.5B) | 22.7 | 21.7 | 23.6 | 24.5 | 27.5 | 24.5 |
| [Falcon (7B)](https://huggingface.co/tiiuae/falcon-7b) | 22.1 | 22.9 | 25.5 | 25.7 | 27.5 | 25.1 |
| [Falcon (40B)](https://huggingface.co/tiiuae/falcon-40b) | 30.2 | 34.8 | 34.8 | 34.9 | 29.2 | 32.1 |
| [BLOOMZ (560M)](https://huggingface.co/bigscience/bloomz-560m) | 22.9 | 23.6 | 23.2 | 24.2 | 25.1 | 24.0 |
| [BLOOMZ (1.1B)](https://huggingface.co/bigscience/bloomz-1b1) | 20.4 | 21.4 | 21.1 | 23.5 | 24.7 | 22.4 |
| [BLOOMZ (1.7B)](https://huggingface.co/bigscience/bloomz-1b7) | 31.5 | 39.3 | 38.3 | 42.8 | 29.4 | 34.4 |
| [BLOOMZ (3B)](https://huggingface.co/bigscience/bloomz-3b) | 33.5 | 44.5 | 39.7 | 46.7 | 29.8 | 36.4 |
| [BLOOMZ (7.1B)](https://huggingface.co/bigscience/bloomz-7b1) | 37.1 | 46.7 | 44.0 | 49.1 | 28.2 | 38.0 |
| [mT0<sub>small</sub> (300M)](https://huggingface.co/bigscience/mt0-small) | 21.8 | 21.4 | 25.7 | 25.1 | 27.6 | 24.9 |
| [mT0<sub>base</sub> (580M)](https://huggingface.co/bigscience/mt0-base) | 22.6 | 22.6 | 25.7 | 25.6 | 26.9 | 25.0 |
| [mT0<sub>large</sub> (1.2B)](https://huggingface.co/bigscience/mt0-large) | 22.0 | 23.4 | 25.1 | 27.3 | 27.6 | 25.2 |
| [mT0<sub>xl</sub> (3.7B)](https://huggingface.co/bigscience/mt0-xl) | 31.4 | 42.9 | 41.0 | 47.8 | 35.7 | 38.2 |
| [mT0<sub>xxl</sub> (13B)](https://huggingface.co/bigscience/mt0-xxl) | 33.5 | 46.2 | 47.9 | 52.6 | **39.6** | 42.5 |
| [LLaMA (7B)](https://arxiv.org/abs/2302.13971) | 22.8 | 23.1 | 25.1 | 26.7 | 27.6 | 25.3 |
| [LLaMA (13B)](https://arxiv.org/abs/2302.13971) | 24.1 | 23.0 | 24.4 | 29.5 | 26.7 | 25.3 |
| [LLaMA (30B)](https://arxiv.org/abs/2302.13971) | 25.4 | 23.5 | 25.9 | 28.4 | 28.7 | 26.5 |
| [LLaMA (65B)](https://arxiv.org/abs/2302.13971) | 33.0 | 37.7 | 40.8 | 41.4 | 32.1 | 35.8 |
| [Bactrian-X-LLaMA (7B)](https://github.com/mbzuai-nlp/bactrian-x) | 23.3 | 24.0 | 26.0 | 26.1 | 27.5 | 25.7 |
| [Bactrian-X-LLaMA (13B)](https://github.com/mbzuai-nlp/bactrian-x) | 28.3 | 29.9 | 32.8 | 35.2 | 29.2 | 30.3 |
#### GPT-3.5 performance (% accuracy) across different education levels
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/IndoMMLU-result.png?raw=true" style="width: 370px;" id="title-icon">
</p>
Red indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We can observe that ChatGPT mostly scores at or above the passing threshold of 65 on Indonesian primary school exams.
#### Few-shot Evaluation
<p align="left">
<img src="https://github.com/fajri91/eval_picts/blob/master/plot_fewshot.png?raw=true" style="width: 380px;" id="title-icon">
</p>
## Data
Each question in the dataset is a multiple-choice question with up to 5 choices and only one choice as the correct answer.
We provide our dataset according to each subject in [data](data) folder. You can also access our dataset via [Hugging Face](https://huggingface.co/datasets/indolem/indommlu).
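For example, a minimal sketch of loading the dataset from the Hub with the `datasets` library (inspect the printed object to see the actual splits and columns):

```python
from datasets import load_dataset

# Load IndoMMLU directly from the Hugging Face Hub.
indommlu = load_dataset("indolem/indommlu")
print(indommlu)  # shows the available splits and their features
```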
<!--
#### Quick Use
Our dataset has been added to [lm-evaluation-harness](https://github.com/EleutherAI/lm-evaluation-harness) and [OpenCompass](https://github.com/InternLM/opencompass), you can evaluate your model via these open-source tools.
-->
#### Evaluation
The code for the evaluation of each model we used is in `evaluate.py`, and the code to run them is listed in `run.sh`.
## Citation
```
@inproceedings{koto-etal-2023-indommlu,
title = "Large Language Models Only Pass Primary School Exams in {I}ndonesia: A Comprehensive Test on {I}ndo{MMLU}",
author = "Fajri Koto and Nurul Aisyah and Haonan Li and Timothy Baldwin",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing (EMNLP)",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
}
```
## License
The IndoMMLU dataset is licensed under a
[Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](http://creativecommons.org/licenses/by-nc-sa/4.0/).
|
indolem/IndoMMLU
|
[
"task_categories:question-answering",
"size_categories:10K<n<100K",
"language:id",
"license:mit",
"knowledge",
"arxiv:2310.04928",
"arxiv:2112.10668",
"arxiv:2302.13971",
"region:us"
] |
2023-10-10T10:16:12+00:00
|
{"language": ["id"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["question-answering"], "pretty_name": "IndoMMLU", "tags": ["knowledge"]}
|
2023-10-11T03:30:54+00:00
|
[
"2310.04928",
"2112.10668",
"2302.13971"
] |
[
"id"
] |
TAGS
#task_categories-question-answering #size_categories-10K<n<100K #language-Indonesian #license-mit #knowledge #arxiv-2310.04928 #arxiv-2112.10668 #arxiv-2302.13971 #region-us
|
IndoMMLU
========

[Fajri Koto](URL), [Nurul Aisyah](URL), [Haonan Li](URL), [Timothy Baldwin](URL)
#### 
[Paper](URL) •
[Leaderboard](URL) •
[Dataset](URL)
Introduction
------------
We introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,
which consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,
we obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency
in the Indonesian language and knowledge of nine local languages and cultures in Indonesia.

Subjects
--------
We categorize the collected questions into different subject areas, including: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.
Examples
--------
These questions are written in Indonesian. For local language subjects, some are written in the local languages. The English version is for illustrative purposes only.

Evaluation
----------
We evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings. This includes GPT-3.5 (ChatGPT), XGLM, Falcon, BLOOMZ, mT0, LLaMA, and Bactrian-X. Prior to the question and multiple-choice options, we add a simple prompt in the Indonesian language:
#### Zero-shot Evaluation
#### GPT-3.5 performance (% accuracy) across different education levels

Red indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We can observe that ChatGPT mostly scores at or above the passing threshold of 65 on Indonesian primary school exams.
#### Few-shot Evaluation

Data
----
Each question in the dataset is a multiple-choice question with up to 5 choices and only one choice as the correct answer.
We provide our dataset according to each subject in data folder. You can also access our dataset via Hugging Face.
#### Evaluation
The code for the evaluation of each model we used is in 'URL', and the code to run them is listed in 'URL'.
License
-------
The IndoMMLU dataset is licensed under a
Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License.
|
[
"#### \n\n[Paper](URL target=) • \n [Leaderboard](URL target=) •\n [Dataset](URL target=)\n\n\nIntroduction\n------------\n\n\nWe introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,\nwhich consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,\nwe obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency\nin the Indonesian language and knowledge of nine local languages and cultures in Indonesia.\n\n\n  \n\n\nSubjects\n--------\n\n\n\nWe categorize the collected questions into different subject areas, including: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.\n\n\nExamples\n--------\n\n\nThese questions are written in Indonesian. For local language subjects, some are written in the local languages. The English version is for illustrative purposes only.\n\n\n\n\n\n\n\nEvaluation\n----------\n\n\nWe evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings. This includes GPT-3.5 (ChatGPT), XGLM, Falcon, BLOOMZ, mT0, LLaMA, and Bactrian-X. Prior to the question and multiple-choice options, we add a simple prompt in the Indonesian language:",
"#### Zero-shot Evaluation",
"#### GPT-3.5 performance (% accuracy) across different education levels\n\n\n\n\n\n\n\nRed indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We can observe that ChatGPT mostly passes a score of 65 in Indonesian primary school exams.",
"#### Few-shot Evaluation\n\n\n\n\n\n\n\nData\n----\n\n\nEach question in the dataset is a multiple-choice question with up to 5 choices and only one choice as the correct answer.\nWe provide our dataset according to each subject in data folder. You can also access our dataset via Hugging Face.",
"#### Evaluation\n\n\nThe code for the evaluation of each model we used is in 'URL', and the code to run them is listed in 'URL'.\n\n\nLicense\n-------\n\n\nThe IndoMMLU dataset is licensed under a\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] |
[
"TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-Indonesian #license-mit #knowledge #arxiv-2310.04928 #arxiv-2112.10668 #arxiv-2302.13971 #region-us \n",
"#### \n\n[Paper](URL target=) • \n [Leaderboard](URL target=) •\n [Dataset](URL target=)\n\n\nIntroduction\n------------\n\n\nWe introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,\nwhich consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,\nwe obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency\nin the Indonesian language and knowledge of nine local languages and cultures in Indonesia.\n\n\n  \n\n\nSubjects\n--------\n\n\n\nWe categorize the collected questions into different subject areas, including: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.\n\n\nExamples\n--------\n\n\nThese questions are written in Indonesian. For local language subjects, some are written in the local languages. The English version is for illustrative purposes only.\n\n\n\n\n\n\n\nEvaluation\n----------\n\n\nWe evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings. This includes GPT-3.5 (ChatGPT), XGLM, Falcon, BLOOMZ, mT0, LLaMA, and Bactrian-X. Prior to the question and multiple-choice options, we add a simple prompt in the Indonesian language:",
"#### Zero-shot Evaluation",
"#### GPT-3.5 performance (% accuracy) across different education levels\n\n\n\n\n\n\n\nRed indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We can observe that ChatGPT mostly passes a score of 65 in Indonesian primary school exams.",
"#### Few-shot Evaluation\n\n\n\n\n\n\n\nData\n----\n\n\nEach question in the dataset is a multiple-choice question with up to 5 choices and only one choice as the correct answer.\nWe provide our dataset according to each subject in data folder. You can also access our dataset via Hugging Face.",
"#### Evaluation\n\n\nThe code for the evaluation of each model we used is in 'URL', and the code to run them is listed in 'URL'.\n\n\nLicense\n-------\n\n\nThe IndoMMLU dataset is licensed under a\nCreative Commons Attribution-NonCommercial-ShareAlike 4.0 International License."
] |
[
69,
324,
7,
77,
71,
61
] |
[
"passage: TAGS\n#task_categories-question-answering #size_categories-10K<n<100K #language-Indonesian #license-mit #knowledge #arxiv-2310.04928 #arxiv-2112.10668 #arxiv-2302.13971 #region-us \n#### \n\n[Paper](URL target=) • \n [Leaderboard](URL target=) •\n [Dataset](URL target=)\n\n\nIntroduction\n------------\n\n\nWe introduce IndoMMLU, the first multi-task language understanding benchmark for Indonesian culture and languages,\nwhich consists of questions from primary school to university entrance exams in Indonesia. By employing professional teachers,\nwe obtain 14,906 questions across 63 tasks and education levels, with 46% of the questions focusing on assessing proficiency\nin the Indonesian language and knowledge of nine local languages and cultures in Indonesia.\n\n\n  \n\n\nSubjects\n--------\n\n\n\nWe categorize the collected questions into different subject areas, including: (1) STEM (Science, Technology, Engineering, and Mathematics); (2) Social Science; (3) Humanities; (4) Indonesian Language; and (5) Local Languages and Cultures.\n\n\nExamples\n--------\n\n\nThese questions are written in Indonesian. For local language subjects, some are written in the local languages. The English version is for illustrative purposes only.\n\n\n\n\n\n\n\nEvaluation\n----------\n\n\nWe evaluate 24 multilingual LLMs of different sizes in zero-shot and few-shot settings. This includes GPT-3.5 (ChatGPT), XGLM, Falcon, BLOOMZ, mT0, LLaMA, and Bactrian-X. Prior to the question and multiple-choice options, we add a simple prompt in the Indonesian language:#### Zero-shot Evaluation#### GPT-3.5 performance (% accuracy) across different education levels\n\n\n\n\n\n\n\nRed indicates that the score is below the minimum passing threshold of 65, while green signifies a score at or above this minimum. We can observe that ChatGPT mostly passes a score of 65 in Indonesian primary school exams."
] |
43922c85f89c232428c1c02760087e8ebb2966da
|
# Dataset Card for Evaluation run of mncai/Mistral-7B-OpenOrca-1k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mncai/Mistral-7B-OpenOrca-1k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [mncai/Mistral-7B-OpenOrca-1k](https://huggingface.co/mncai/Mistral-7B-OpenOrca-1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split is always pointing to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mncai__Mistral-7B-OpenOrca-1k",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-25T07:41:12.101153](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__Mistral-7B-OpenOrca-1k/blob/main/results_2023-10-25T07-41-12.101153.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0053481543624161075,
"em_stderr": 0.0007469252903319289,
"f1": 0.09739828020134218,
"f1_stderr": 0.001857285751420582,
"acc": 0.45294831833688076,
"acc_stderr": 0.01023434017882167
},
"harness|drop|3": {
"em": 0.0053481543624161075,
"em_stderr": 0.0007469252903319289,
"f1": 0.09739828020134218,
"f1_stderr": 0.001857285751420582
},
"harness|gsm8k|5": {
"acc": 0.1197877179681577,
"acc_stderr": 0.008944213403553095
},
"harness|winogrande|5": {
"acc": 0.7861089187056038,
"acc_stderr": 0.011524466954090247
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_mncai__Mistral-7B-OpenOrca-1k
|
[
"region:us"
] |
2023-10-10T10:19:36+00:00
|
{"pretty_name": "Evaluation run of mncai/Mistral-7B-OpenOrca-1k", "dataset_summary": "Dataset automatically created during the evaluation run of model [mncai/Mistral-7B-OpenOrca-1k](https://huggingface.co/mncai/Mistral-7B-OpenOrca-1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mncai__Mistral-7B-OpenOrca-1k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-25T07:41:12.101153](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__Mistral-7B-OpenOrca-1k/blob/main/results_2023-10-25T07-41-12.101153.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0053481543624161075,\n \"em_stderr\": 0.0007469252903319289,\n \"f1\": 0.09739828020134218,\n \"f1_stderr\": 0.001857285751420582,\n \"acc\": 0.45294831833688076,\n \"acc_stderr\": 0.01023434017882167\n },\n \"harness|drop|3\": {\n \"em\": 0.0053481543624161075,\n \"em_stderr\": 0.0007469252903319289,\n \"f1\": 0.09739828020134218,\n \"f1_stderr\": 0.001857285751420582\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1197877179681577,\n \"acc_stderr\": 0.008944213403553095\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7861089187056038,\n \"acc_stderr\": 0.011524466954090247\n }\n}\n```", "repo_url": "https://huggingface.co/mncai/Mistral-7B-OpenOrca-1k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_25T07_41_12.101153", "path": ["**/details_harness|drop|3_2023-10-25T07-41-12.101153.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-25T07-41-12.101153.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_25T07_41_12.101153", "path": ["**/details_harness|gsm8k|5_2023-10-25T07-41-12.101153.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-25T07-41-12.101153.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hellaswag|10_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", 
"data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-19-13.410150.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-19-13.410150.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-19-13.410150.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-19-13.410150.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-19-13.410150.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_25T07_41_12.101153", "path": ["**/details_harness|winogrande|5_2023-10-25T07-41-12.101153.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-25T07-41-12.101153.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T11_19_13.410150", "path": ["results_2023-10-10T11-19-13.410150.parquet"]}, {"split": "2023_10_25T07_41_12.101153", "path": ["results_2023-10-25T07-41-12.101153.parquet"]}, {"split": "latest", "path": ["results_2023-10-25T07-41-12.101153.parquet"]}]}]}
|
2023-10-25T06:41:24+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of mncai/Mistral-7B-OpenOrca-1k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model mncai/Mistral-7B-OpenOrca-1k on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
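A minimal sketch, assuming the leaderboard's usual `open-llm-leaderboard/details_<org>__<model>` repository naming and the `harness_winogrande_5` configuration (both inferred from convention rather than stated in this stripped card text):

```python
from datasets import load_dataset

# Hypothetical repo id, inferred from the leaderboard's usual
# "details_<org>__<model>" naming convention for detail datasets.
data = load_dataset("open-llm-leaderboard/details_mncai__Mistral-7B-OpenOrca-1k",
	"harness_winogrande_5",
	split="train")
```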
## Latest results
These are the latest results from run 2023-10-25T07:41:12.101153 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of mncai/Mistral-7B-OpenOrca-1k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/Mistral-7B-OpenOrca-1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T07:41:12.101153(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mncai/Mistral-7B-OpenOrca-1k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/Mistral-7B-OpenOrca-1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-25T07:41:12.101153(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
23,
31,
171,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mncai/Mistral-7B-OpenOrca-1k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/Mistral-7B-OpenOrca-1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-25T07:41:12.101153(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6e8a260108a15c386f482f56b343298c887e9b1c
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down",
"harness_winogrande_5",
split="train")
```
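The aggregated metrics described above live in the "results" configuration; a minimal sketch for pulling them, assuming the "latest" split naming used throughout this card:

```python
from datasets import load_dataset

# "latest" always points to the most recent evaluation run.
results = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down",
	"results",
	split="latest")
```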
## Latest results
These are the [latest results from run 2023-10-28T04:49:22.682759](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down/blob/main/results_2023-10-28T04-49-22.682759.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each one in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.07487416107382551,
"em_stderr": 0.0026952933607895307,
"f1": 0.12380872483221442,
"f1_stderr": 0.00285148758396334,
"acc": 0.44832731261215925,
"acc_stderr": 0.010657041987495935
},
"harness|drop|3": {
"em": 0.07487416107382551,
"em_stderr": 0.0026952933607895307,
"f1": 0.12380872483221442,
"f1_stderr": 0.00285148758396334
},
"harness|gsm8k|5": {
"acc": 0.133434420015163,
"acc_stderr": 0.00936649160978448
},
"harness|winogrande|5": {
"acc": 0.7632202052091555,
"acc_stderr": 0.01194759236520739
}
}
```
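As a quick illustration of how these aggregated numbers are keyed (a sketch over a hypothetical dict mirroring the layout above; per-task entries follow the `harness|<task>|<n_fewshot>` pattern):

```python
# Hypothetical dict mirroring the results layout shown above.
latest_results = {
    "all": {"acc": 0.44832731261215925},
    "harness|winogrande|5": {"acc": 0.7632202052091555, "acc_stderr": 0.01194759236520739},
}

winogrande_acc = latest_results["harness|winogrande|5"]["acc"]
print(f"Winogrande (5-shot) accuracy: {winogrande_acc:.4f}")
```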
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down
|
[
"region:us"
] |
2023-10-10T10:25:25+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T04:49:22.682759](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down/blob/main/results_2023-10-28T04-49-22.682759.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.07487416107382551,\n \"em_stderr\": 0.0026952933607895307,\n \"f1\": 0.12380872483221442,\n \"f1_stderr\": 0.00285148758396334,\n \"acc\": 0.44832731261215925,\n \"acc_stderr\": 0.010657041987495935\n },\n \"harness|drop|3\": {\n \"em\": 0.07487416107382551,\n \"em_stderr\": 0.0026952933607895307,\n \"f1\": 0.12380872483221442,\n \"f1_stderr\": 0.00285148758396334\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.133434420015163,\n \"acc_stderr\": 0.00936649160978448\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7632202052091555,\n \"acc_stderr\": 0.01194759236520739\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T04_49_22.682759", "path": ["**/details_harness|drop|3_2023-10-28T04-49-22.682759.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T04-49-22.682759.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T04_49_22.682759", "path": ["**/details_harness|gsm8k|5_2023-10-28T04-49-22.682759.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T04-49-22.682759.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hellaswag|10_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hellaswag|10_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-25-01.199069.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-25-01.199069.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-25-01.199069.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-25-01.199069.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-25-01.199069.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-25-01.199069.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T04_49_22.682759", "path": ["**/details_harness|winogrande|5_2023-10-28T04-49-22.682759.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T04-49-22.682759.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T11_25_01.199069", "path": ["results_2023-10-10T11-25-01.199069.parquet"]}, {"split": "2023_10_28T04_49_22.682759", "path": ["results_2023-10-28T04-49-22.682759.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T04-49-22.682759.parquet"]}]}]}
|
2023-10-28T03:49:35+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
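A minimal sketch (the repository name is inferred from the leaderboard's usual `details_<org>__<model>` naming pattern and is an assumption here, as is the choice of configuration):

```python
from datasets import load_dataset

# Assumed repository name, following the leaderboard's details naming pattern.
data = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down",
    "harness_winogrande_5",  # any of the 64 configurations can be used here
    split="train",
)
```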
## Latest results
These are the latest results from run 2023-10-28T04:49:22.682759 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
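One possible sketch for pulling these aggregated numbers (the "results" configuration and its "latest" split appear in this card's config list; the repository name is the same assumption as above):

```python
from datasets import load_dataset

# "results" aggregates all runs; "latest" points at the most recent one.
results = load_dataset(
    "open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r8-gate_up_down",
    "results",
    split="latest",
)
print(results[0])  # assumed: one aggregated record per run
```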
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T04:49:22.682759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T04:49:22.682759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
37,
31,
185,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r8-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T04:49:22.682759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7396a595866647f61cc455b6a7e712b519f81b01
|
# Dataset Card for Evaluation run of mncai/Mistral-7B-openplatypus-1k
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/mncai/Mistral-7B-openplatypus-1k
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [mncai/Mistral-7B-openplatypus-1k](https://huggingface.co/mncai/Mistral-7B-openplatypus-1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mncai__Mistral-7B-openplatypus-1k",
"harness_winogrande_5",
split="train")
```
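The aggregated numbers live in the "results" configuration mentioned above; a short sketch for reading them (the "latest" split name comes from this card's config list, and indexing the first record is an assumption about the file's layout):

```python
from datasets import load_dataset

# "results" stores the aggregated metrics; "latest" is the most recent run.
results = load_dataset(
    "open-llm-leaderboard/details_mncai__Mistral-7B-openplatypus-1k",
    "results",
    split="latest",
)
print(results[0])  # assumed: a single aggregated record per run
```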
## Latest results
These are the [latest results from run 2023-10-27T04:31:44.728538](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__Mistral-7B-openplatypus-1k/blob/main/results_2023-10-27T04-31-44.728538.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.0019924496644295304,
"em_stderr": 0.00045666764626669425,
"f1": 0.06536912751677865,
"f1_stderr": 0.001427220169024926,
"acc": 0.47155979662189373,
"acc_stderr": 0.01115073074341337
},
"harness|drop|3": {
"em": 0.0019924496644295304,
"em_stderr": 0.00045666764626669425,
"f1": 0.06536912751677865,
"f1_stderr": 0.001427220169024926
},
"harness|gsm8k|5": {
"acc": 0.17437452615617893,
"acc_stderr": 0.010451421361976233
},
"harness|winogrande|5": {
"acc": 0.7687450670876085,
"acc_stderr": 0.01185004012485051
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_mncai__Mistral-7B-openplatypus-1k
|
[
"region:us"
] |
2023-10-10T10:26:59+00:00
|
{"pretty_name": "Evaluation run of mncai/Mistral-7B-openplatypus-1k", "dataset_summary": "Dataset automatically created during the evaluation run of model [mncai/Mistral-7B-openplatypus-1k](https://huggingface.co/mncai/Mistral-7B-openplatypus-1k) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mncai__Mistral-7B-openplatypus-1k\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-27T04:31:44.728538](https://huggingface.co/datasets/open-llm-leaderboard/details_mncai__Mistral-7B-openplatypus-1k/blob/main/results_2023-10-27T04-31-44.728538.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.0019924496644295304,\n \"em_stderr\": 0.00045666764626669425,\n \"f1\": 0.06536912751677865,\n \"f1_stderr\": 0.001427220169024926,\n \"acc\": 0.47155979662189373,\n \"acc_stderr\": 0.01115073074341337\n },\n \"harness|drop|3\": {\n \"em\": 0.0019924496644295304,\n \"em_stderr\": 0.00045666764626669425,\n \"f1\": 0.06536912751677865,\n \"f1_stderr\": 0.001427220169024926\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.17437452615617893,\n \"acc_stderr\": 0.010451421361976233\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7687450670876085,\n \"acc_stderr\": 0.01185004012485051\n }\n}\n```", "repo_url": "https://huggingface.co/mncai/Mistral-7B-openplatypus-1k", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_27T04_31_44.728538", "path": ["**/details_harness|drop|3_2023-10-27T04-31-44.728538.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-27T04-31-44.728538.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_27T04_31_44.728538", "path": ["**/details_harness|gsm8k|5_2023-10-27T04-31-44.728538.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-27T04-31-44.728538.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hellaswag|10_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-26-36.133476.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-26-36.133476.parquet", 
"**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-26-36.133476.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", 
"data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": 
"2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-26-36.133476.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-26-36.133476.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_27T04_31_44.728538", "path": ["**/details_harness|winogrande|5_2023-10-27T04-31-44.728538.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-27T04-31-44.728538.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T11_26_36.133476", "path": ["results_2023-10-10T11-26-36.133476.parquet"]}, {"split": "2023_10_27T04_31_44.728538", "path": ["results_2023-10-27T04-31-44.728538.parquet"]}, {"split": "latest", "path": ["results_2023-10-27T04-31-44.728538.parquet"]}]}]}
|
2023-10-27T03:31:58+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of mncai/Mistral-7B-openplatypus-1k
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model mncai/Mistral-7B-openplatypus-1k on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
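A minimal sketch, using the repository name from this card (any of the 64 configurations can be substituted for the one shown):

```python
from datasets import load_dataset

# Load one evaluation configuration; the "train" split points to the latest results.
data = load_dataset(
    "open-llm-leaderboard/details_mncai__Mistral-7B-openplatypus-1k",
    "harness_winogrande_5",
    split="train",
)
```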
## Latest results
These are the latest results from run 2023-10-27T04:31:44.728538 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of mncai/Mistral-7B-openplatypus-1k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/Mistral-7B-openplatypus-1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T04:31:44.728538(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mncai/Mistral-7B-openplatypus-1k",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/Mistral-7B-openplatypus-1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-27T04:31:44.728538(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
24,
31,
172,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mncai/Mistral-7B-openplatypus-1k## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model mncai/Mistral-7B-openplatypus-1k on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-27T04:31:44.728538(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
7a3cc63ba0c0dfc7b35ed323dcc8bd8462426722
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-28T17:20:18.070512](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down/blob/main/results_2023-10-28T17-20-18.070512.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.3329488255033557,
"em_stderr": 0.0048262295177582465,
"f1": 0.37401635906040315,
"f1_stderr": 0.004743003734543155,
"acc": 0.444993426772692,
"acc_stderr": 0.010459654838365608
},
"harness|drop|3": {
"em": 0.3329488255033557,
"em_stderr": 0.0048262295177582465,
"f1": 0.37401635906040315,
"f1_stderr": 0.004743003734543155
},
"harness|gsm8k|5": {
"acc": 0.12282031842304776,
"acc_stderr": 0.009041108602874675
},
"harness|winogrande|5": {
"acc": 0.7671665351223362,
"acc_stderr": 0.011878201073856542
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down
|
[
"region:us"
] |
2023-10-10T10:32:28+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-28T17:20:18.070512](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down/blob/main/results_2023-10-28T17-20-18.070512.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.3329488255033557,\n \"em_stderr\": 0.0048262295177582465,\n \"f1\": 0.37401635906040315,\n \"f1_stderr\": 0.004743003734543155,\n \"acc\": 0.444993426772692,\n \"acc_stderr\": 0.010459654838365608\n },\n \"harness|drop|3\": {\n \"em\": 0.3329488255033557,\n \"em_stderr\": 0.0048262295177582465,\n \"f1\": 0.37401635906040315,\n \"f1_stderr\": 0.004743003734543155\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.12282031842304776,\n \"acc_stderr\": 0.009041108602874675\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7671665351223362,\n \"acc_stderr\": 0.011878201073856542\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_28T17_20_18.070512", "path": ["**/details_harness|drop|3_2023-10-28T17-20-18.070512.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-28T17-20-18.070512.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_28T17_20_18.070512", "path": ["**/details_harness|gsm8k|5_2023-10-28T17-20-18.070512.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-28T17-20-18.070512.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": 
["**/details_harness|hellaswag|10_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-32-04.979499.parquet", 
"**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-32-04.979499.parquet", 
"**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-32-04.979499.parquet", 
"**/details_harness|hendrycksTest-virology|5_2023-10-10T11-32-04.979499.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-32-04.979499.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-32-04.979499.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-32-04.979499.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_28T17_20_18.070512", "path": ["**/details_harness|winogrande|5_2023-10-28T17-20-18.070512.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-28T17-20-18.070512.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T11_32_04.979499", "path": ["results_2023-10-10T11-32-04.979499.parquet"]}, {"split": "2023_10_28T17_20_18.070512", "path": ["results_2023-10-28T17-20-18.070512.parquet"]}, {"split": "latest", "path": ["results_2023-10-28T17-20-18.070512.parquet"]}]}]}
|
2023-10-28T16:20:30+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
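This rendition of the card strips the code block; below is a minimal sketch mirroring the snippet in the full card above (same repository id and config name):
```python
from datasets import load_dataset

# Repo id and config name as given in the full card for this dataset.
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down",
	"harness_winogrande_5",
	split="train")
```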
## Latest results
These are the latest results from run 2023-10-28T17:20:18.070512 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T17:20:18.070512(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-28T17:20:18.070512(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
44,
31,
192,
67,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-q_k_v_o_gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-28T17:20:18.070512(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
b728202412697673db821cd3bee275ad821ecc31
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down
## Dataset Description
- **Homepage:**
- **Repository:** https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down
- **Paper:**
- **Leaderboard:** https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard
- **Point of Contact:** [email protected]
### Dataset Summary
Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-gate_up_down",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-10-26T15:53:08.381645](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-gate_up_down/blob/main/results_2023-10-26T15-53-08.381645.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"em": 0.2154991610738255,
"em_stderr": 0.004210747014430766,
"f1": 0.25919148489932897,
"f1_stderr": 0.004195696877017449,
"acc": 0.4490387889225113,
"acc_stderr": 0.01073317504472215
},
"harness|drop|3": {
"em": 0.2154991610738255,
"em_stderr": 0.004210747014430766,
"f1": 0.25919148489932897,
"f1_stderr": 0.004195696877017449
},
"harness|gsm8k|5": {
"acc": 0.1372251705837756,
"acc_stderr": 0.009477808244600398
},
"harness|winogrande|5": {
"acc": 0.760852407261247,
"acc_stderr": 0.011988541844843905
}
}
```
### Supported Tasks and Leaderboards
[More Information Needed]
### Languages
[More Information Needed]
## Dataset Structure
### Data Instances
[More Information Needed]
### Data Fields
[More Information Needed]
### Data Splits
[More Information Needed]
## Dataset Creation
### Curation Rationale
[More Information Needed]
### Source Data
#### Initial Data Collection and Normalization
[More Information Needed]
#### Who are the source language producers?
[More Information Needed]
### Annotations
#### Annotation process
[More Information Needed]
#### Who are the annotators?
[More Information Needed]
### Personal and Sensitive Information
[More Information Needed]
## Considerations for Using the Data
### Social Impact of Dataset
[More Information Needed]
### Discussion of Biases
[More Information Needed]
### Other Known Limitations
[More Information Needed]
## Additional Information
### Dataset Curators
[More Information Needed]
### Licensing Information
[More Information Needed]
### Citation Information
[More Information Needed]
### Contributions
[More Information Needed]
|
open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-gate_up_down
|
[
"region:us"
] |
2023-10-10T10:38:47+00:00
|
{"pretty_name": "Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down", "dataset_summary": "Dataset automatically created during the evaluation run of model [CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down](https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-gate_up_down\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-10-26T15:53:08.381645](https://huggingface.co/datasets/open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-gate_up_down/blob/main/results_2023-10-26T15-53-08.381645.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"em\": 0.2154991610738255,\n \"em_stderr\": 0.004210747014430766,\n \"f1\": 0.25919148489932897,\n \"f1_stderr\": 0.004195696877017449,\n \"acc\": 0.4490387889225113,\n \"acc_stderr\": 0.01073317504472215\n },\n \"harness|drop|3\": {\n \"em\": 0.2154991610738255,\n \"em_stderr\": 0.004210747014430766,\n \"f1\": 0.25919148489932897,\n \"f1_stderr\": 0.004195696877017449\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.1372251705837756,\n \"acc_stderr\": 0.009477808244600398\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.760852407261247,\n \"acc_stderr\": 0.011988541844843905\n }\n}\n```", "repo_url": "https://huggingface.co/CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_drop_3", "data_files": [{"split": "2023_10_26T15_53_08.381645", "path": ["**/details_harness|drop|3_2023-10-26T15-53-08.381645.parquet"]}, {"split": "latest", "path": ["**/details_harness|drop|3_2023-10-26T15-53-08.381645.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_10_26T15_53_08.381645", "path": ["**/details_harness|gsm8k|5_2023-10-26T15-53-08.381645.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-10-26T15-53-08.381645.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hellaswag|10_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hellaswag|10_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-38-23.134636.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-38-23.134636.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-management|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-38-23.134636.parquet", "**/details_harness|hendrycksTest-virology|5_2023-10-10T11-38-23.134636.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": 
"2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": 
["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": 
["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_psychology|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-38-23.134636.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-10-10T11-38-23.134636.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_10_26T15_53_08.381645", "path": ["**/details_harness|winogrande|5_2023-10-26T15-53-08.381645.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-10-26T15-53-08.381645.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_10_10T11_38_23.134636", "path": ["results_2023-10-10T11-38-23.134636.parquet"]}, {"split": "2023_10_26T15_53_08.381645", "path": ["results_2023-10-26T15-53-08.381645.parquet"]}, {"split": "latest", "path": ["results_2023-10-26T15-53-08.381645.parquet"]}]}]}
|
2023-10-26T14:53:22+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down
## Dataset Description
- Homepage:
- Repository: URL
- Paper:
- Leaderboard: URL
- Point of Contact: clementine@URL
### Dataset Summary
Dataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down on the Open LLM Leaderboard.
The dataset is composed of 64 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
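```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_CHIH-HUNG__llama-2-13b-FINETUNE5_4w-r4-gate_up_down",
	"harness_winogrande_5",
	split="train")
```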
## Latest results
These are the latest results from run 2023-10-26T15:53:08.381645 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
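```python
{
    "all": {
        "em": 0.2154991610738255,
        "em_stderr": 0.004210747014430766,
        "f1": 0.25919148489932897,
        "f1_stderr": 0.004195696877017449,
        "acc": 0.4490387889225113,
        "acc_stderr": 0.01073317504472215
    },
    "harness|drop|3": {
        "em": 0.2154991610738255,
        "em_stderr": 0.004210747014430766,
        "f1": 0.25919148489932897,
        "f1_stderr": 0.004195696877017449
    },
    "harness|gsm8k|5": {
        "acc": 0.1372251705837756,
        "acc_stderr": 0.009477808244600398
    },
    "harness|winogrande|5": {
        "acc": 0.760852407261247,
        "acc_stderr": 0.011988541844843905
    }
}
```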
### Supported Tasks and Leaderboards
### Languages
## Dataset Structure
### Data Instances
### Data Fields
### Data Splits
## Dataset Creation
### Curation Rationale
### Source Data
#### Initial Data Collection and Normalization
#### Who are the source language producers?
### Annotations
#### Annotation process
#### Who are the annotators?
### Personal and Sensitive Information
## Considerations for Using the Data
### Social Impact of Dataset
### Discussion of Biases
### Other Known Limitations
## Additional Information
### Dataset Curators
### Licensing Information
### Contributions
|
[
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T15:53:08.381645(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down",
"## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL",
"### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-10-26T15:53:08.381645(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"### Supported Tasks and Leaderboards",
"### Languages",
"## Dataset Structure",
"### Data Instances",
"### Data Fields",
"### Data Splits",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Initial Data Collection and Normalization",
"#### Who are the source language producers?",
"### Annotations",
"#### Annotation process",
"#### Who are the annotators?",
"### Personal and Sensitive Information",
"## Considerations for Using the Data",
"### Social Impact of Dataset",
"### Discussion of Biases",
"### Other Known Limitations",
"## Additional Information",
"### Dataset Curators",
"### Licensing Information",
"### Contributions"
] |
[
6,
36,
31,
184,
66,
10,
4,
6,
6,
5,
5,
5,
7,
4,
10,
10,
5,
5,
9,
8,
8,
7,
8,
7,
5,
6,
6,
5
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down## Dataset Description\n\n- Homepage: \n- Repository: URL\n- Paper: \n- Leaderboard: URL\n- Point of Contact: clementine@URL### Dataset Summary\n\nDataset automatically created during the evaluation run of model CHIH-HUNG/llama-2-13b-FINETUNE5_4w-r4-gate_up_down on the Open LLM Leaderboard.\n\nThe dataset is composed of 64 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the agregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-10-26T15:53:08.381645(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):### Supported Tasks and Leaderboards### Languages## Dataset Structure### Data Instances### Data Fields### Data Splits## Dataset Creation### Curation Rationale### Source Data#### Initial Data Collection and Normalization#### Who are the source language producers?### Annotations#### Annotation process#### Who are the annotators?### Personal and Sensitive Information## Considerations for Using the Data### Social Impact of Dataset### Discussion of Biases### Other Known Limitations## Additional Information### Dataset Curators### Licensing Information### Contributions"
] |
6a1c2a328f03d53022e9fc22c37afe2bcfe6c406
|
# Dataset Card for "old_python"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
loubnabnl/old_python
|
[
"region:us"
] |
2023-10-10T10:53:34+00:00
|
{"dataset_info": {"features": [{"name": "__id__", "dtype": "int64"}, {"name": "blob_id", "dtype": "string"}, {"name": "directory_id", "dtype": "string"}, {"name": "path", "dtype": "string"}, {"name": "content_id", "dtype": "string"}, {"name": "detected_licenses", "sequence": "string"}, {"name": "license_type", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "repo_url", "dtype": "string"}, {"name": "snapshot_id", "dtype": "string"}, {"name": "revision_id", "dtype": "string"}, {"name": "branch_name", "dtype": "string"}, {"name": "visit_date", "dtype": "timestamp[ns]"}, {"name": "revision_date", "dtype": "timestamp[ns]"}, {"name": "committer_date", "dtype": "timestamp[ns]"}, {"name": "github_id", "dtype": "int64"}, {"name": "star_events_count", "dtype": "int64"}, {"name": "fork_events_count", "dtype": "int64"}, {"name": "gha_license_id", "dtype": "string"}, {"name": "gha_fork", "dtype": "bool"}, {"name": "gha_event_created_at", "dtype": "timestamp[ns]"}, {"name": "gha_created_at", "dtype": "timestamp[ns]"}, {"name": "gha_updated_at", "dtype": "timestamp[ns]"}, {"name": "gha_pushed_at", "dtype": "timestamp[ns]"}, {"name": "gha_size", "dtype": "int64"}, {"name": "gha_stargazers_count", "dtype": "int32"}, {"name": "gha_forks_count", "dtype": "int32"}, {"name": "gha_open_issues_count", "dtype": "int32"}, {"name": "gha_language", "dtype": "string"}, {"name": "gha_archived", "dtype": "bool"}, {"name": "gha_disabled", "dtype": "bool"}, {"name": "content", "dtype": "string"}, {"name": "src_encoding", "dtype": "string"}, {"name": "language", "dtype": "string"}, {"name": "is_vendor", "dtype": "bool"}, {"name": "is_generated", "dtype": "bool"}, {"name": "year", "dtype": "int64"}], "splits": [{"name": "train", "num_bytes": 205861897.66555908, "num_examples": 42509}], "download_size": 91464746, "dataset_size": 205861897.66555908}}
|
2023-10-10T10:53:50+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "old_python"
More Information needed
|
[
"# Dataset Card for \"old_python\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"old_python\"\n\nMore Information needed"
] |
[
6,
14
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"old_python\"\n\nMore Information needed"
] |
cd76a5662002d6000b325515798141569e71c06d
|
# Dataset Card for "processed_bert_dataset"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
|
Ramzey/processed_bert_dataset
|
[
"region:us"
] |
2023-10-10T10:56:07+00:00
|
{"dataset_info": {"features": [{"name": "input_ids", "sequence": "int32"}, {"name": "token_type_ids", "sequence": "int8"}, {"name": "attention_mask", "sequence": "int8"}, {"name": "special_tokens_mask", "sequence": "int8"}], "splits": [{"name": "train", "num_bytes": 576000.0, "num_examples": 160}], "download_size": 0, "dataset_size": 576000.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]}
|
2023-10-10T12:02:03+00:00
|
[] |
[] |
TAGS
#region-us
|
# Dataset Card for "processed_bert_dataset"
More Information needed
|
[
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
[
"TAGS\n#region-us \n",
"# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
[
6,
17
] |
[
"passage: TAGS\n#region-us \n# Dataset Card for \"processed_bert_dataset\"\n\nMore Information needed"
] |
3b22a6a8b08fa444426e3c10a8eb5dfb021b472b
|
450 quotes by 9 philosophers (50 quotes each), labeled with the author and with a variable number of topic tags.
The quotes originally come from https://www.kaggle.com/datasets/mertbozkurt5/quotes-by-philosophers (CC BY-NC-SA 4.0).
The text of each quote has been cleaned of soft-hyphens (`\xad`) and other weird characters.
The topic labeling has been done with a default HuggingFace zero-shot classifier pipeline with multi_labels.
|
datastax/philosopher-quotes
|
[
"task_categories:conversational",
"size_categories:n<1K",
"language:en",
"license:cc",
"code",
"region:us"
] |
2023-10-10T11:02:19+00:00
|
{"language": ["en"], "license": "cc", "size_categories": ["n<1K"], "task_categories": ["conversational"], "pretty_name": "Philosophers Quotes", "tags": ["code"]}
|
2023-10-11T06:55:38+00:00
|
[] |
[
"en"
] |
TAGS
#task_categories-conversational #size_categories-n<1K #language-English #license-cc #code #region-us
|
450 quotes by 9 philosophers (50 quotes each), labeled with the author and with a variable number of topic tags.
The quotes originally come from URL (CC BY-NC-SA 4.0).
The text of each quote has been cleaned of soft-hyphens ('\xad') and other weird characters.
The topic labeling has been done with a default HuggingFace zero-shot classifier pipeline with multi_labels.
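For reference, the sketch below reproduces that labeling setup as described; the exact checkpoint, candidate topic list, and score threshold are not stated in this card, so those parts are assumptions for illustration only.

```python
from transformers import pipeline

# Default zero-shot classification pipeline (the card does not name a
# checkpoint; transformers falls back to its default NLI model).
classifier = pipeline("zero-shot-classification")

# Hypothetical topic tags -- the real candidate list used for this
# dataset is not documented here.
candidate_topics = ["knowledge", "ethics", "happiness", "politics"]

quote = "The unexamined life is not worth living."
result = classifier(quote, candidate_topics, multi_label=True)

# With multi_label=True each topic is scored independently, which is what
# allows a variable number of tags per quote; keep those above a threshold.
tags = [label for label, score in zip(result["labels"], result["scores"]) if score > 0.5]
print(tags)
```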
|
[] |
[
"TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-cc #code #region-us \n"
] |
[
37
] |
[
"passage: TAGS\n#task_categories-conversational #size_categories-n<1K #language-English #license-cc #code #region-us \n"
] |
eb4d9319b175a0159fa979ede043ebb32ea2945f
|
This is a first pass at generating MathJSON formulations of math problems to allow deterministic calculations (via cortex-js/compute-engine).
LLMs are decent at problem formulation, but terrible at calculations, especially things like calculating the cosine of R radians, high-precision floating-point multiplication, etc. Let's let LLMs do what they are good at and run the computation outside.
|
jondurbin/mathjson-alpha
|
[
"license:apache-2.0",
"region:us"
] |
2023-10-10T11:10:29+00:00
|
{"license": "apache-2.0", "datasets": ["gsm8k", "meta-math/MetaMathQA"]}
|
2023-10-10T11:15:58+00:00
|
[] |
[] |
TAGS
#license-apache-2.0 #region-us
|
This is a first pass at generating MathJSON formulations of math problems to allow deterministic calculations (via cortex-js/compute-engine).
LLMs are decent at problem formulation, but terrible at calculations, especially things like calculating the cosine of R radians, high-precision floating-point multiplication, etc. Let's let LLMs do what they are good at and run the computation outside.
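As a rough illustration of what such a formulation looks like, a MathJSON expression is a JSON tree with an operator head followed by its arguments; the problem and expression below are hypothetical examples, not records from this dataset.

```python
import json

# Hypothetical problem: "What is 3.14159 times the cosine of 0.5 radians?"
# The LLM only needs to emit the expression tree; the numeric evaluation
# is delegated to a CAS such as cortex-js/compute-engine.
mathjson_expr = ["Multiply", 3.14159, ["Cos", 0.5]]

print(json.dumps(mathjson_expr))  # ["Multiply", 3.14159, ["Cos", 0.5]]
```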
|
[] |
[
"TAGS\n#license-apache-2.0 #region-us \n"
] |
[
14
] |
[
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |