sha (stringlengths 40-40) | text (stringlengths 1-13.4M) | id (stringlengths 2-117) | tags (listlengths 1-7.91k) | created_at (stringlengths 25-25) | metadata (stringlengths 2-875k) | last_modified (stringlengths 25-25) | arxiv (listlengths 0-25) | languages (listlengths 0-7.91k) | tags_str (stringlengths 17-159k) | text_str (stringlengths 1-447k) | text_lists (listlengths 0-352) | processed_texts (listlengths 1-353) | tokens_length (listlengths 1-353) | input_texts (listlengths 1-40) |
---|---|---|---|---|---|---|---|---|---|---|---|---|---|---|
c76c11902591fb1c341eb7ad8c49b3aa4d0c7262 |
## Description
A channel made to feature the "LaVie" text-to-video model
## Model
LaVie
## Voice
Julian
## Orientation
Landscape
# Tags
- LaVie
# Style
beautiful, sharp, detailed
# Music
mainstream house music
## Prompt
A channel generating demo videos of about 2 to 3 minutes each.
It works like a demo reel: it shows video footage of various subjects (industrial, objects, outdoor scenes, landscapes, beach, mountain, forest, desert, busy city, suburban, aquatic, etc.).
| jbilcke-hf/ai-tube-la-vie-only | [
"license:cc-by-nc-4.0",
"region:us"
] | 2023-12-21T12:19:14+00:00 | {"license": "cc-by-nc-4.0", "pretty_name": "La Vie"} | 2023-12-21T12:22:27+00:00 | [] | [] | TAGS
#license-cc-by-nc-4.0 #region-us
|
6a7987ddfcd4ea8e954731d8c75461244d2dbaa6 | * epicff series: Based on epicphotogasm, fine-tuned with DreamBooth. Nice outputs, but a little stiff.
* epicmq: I don't remember.
* htc: Based on SD-1.5 pruned (my mistake), fine-tuned with DreamBooth. A very creative model, but it is very hard to create good images with it on its own.
* merged_ft: Based on a mix of SD-1.5 full (7 GB) and epicphotogasm (weighted 70%, for structure), using Novel AI finetuning. A good mix of flexibility and polish (see the merge sketch below).
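A minimal sketch of the kind of 70/30 weighted checkpoint merge described for merged_ft; the file names and the uniform per-tensor blend are assumptions, not the author's exact recipe:

```python
# Hypothetical sketch of a 70/30 weighted checkpoint merge like the one
# described for merged_ft. File names and the uniform per-tensor blend
# are assumptions, not the author's exact recipe.
import torch

ALPHA = 0.7  # weight given to epicphotogasm ("for structure")

sd15 = torch.load("v1-5-full.ckpt", map_location="cpu")["state_dict"]
epic = torch.load("epicphotogasm.ckpt", map_location="cpu")["state_dict"]

merged = {}
for key, base in sd15.items():
    other = epic.get(key)
    if other is not None and other.shape == base.shape and base.is_floating_point():
        # Linear interpolation between the two checkpoints.
        merged[key] = (1.0 - ALPHA) * base + ALPHA * other
    else:
        # Fall back to the SD-1.5 weights when the tensors don't line up.
        merged[key] = base

torch.save({"state_dict": merged}, "merged_ft.ckpt")
```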
| halftimecoder/sd-orgasmic-c1 | [
"region:us"
] | 2023-12-21T12:42:19+00:00 | {} | 2024-01-25T16:01:27+00:00 | [] | [] | TAGS
#region-us
|
54cc508ce569fee58a3ad9863cb5b928f4cc5b7a | # BLOSSOM MATH V3
### Introduction
Blossom Math V3 is a bilingual Chinese-English math dialogue dataset derived from Math23K and GSM8K, suitable for fine-tuning models on math problems.
Compared with blossom-math-v2, the data-processing pipeline has been further optimized and answer checking has been strengthened.
The dataset takes every question from Math23K, GSM8K, and a translated version of GSM8K, then calls gpt-3.5-turbo-0613 to generate answers. The generated answers are validated against the answers in the original datasets, and incorrect ones are filtered out, which largely guarantees the accuracy of both questions and answers.
This release contains 25% of the full data, about 10K records.
### Languages
Chinese and English
### Dataset Structure
Each record represents a complete problem and its answer and contains five fields: id, input, output, answer, and dataset (a minimal loading sketch follows the field list).
- id: string; the problem id from the original dataset. Combined with the dataset field, it uniquely identifies a problem.
- input: string; the question.
- output: string; the answer generated by gpt-3.5-turbo-0613.
- answer: string; the reference (correct) answer.
- dataset: string; the source dataset.
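Assuming the dataset loads with the Hugging Face `datasets` library and exposes a `train` split (an assumption, not stated above), a minimal sketch for inspecting these fields:

```python
# Minimal sketch: load the dataset and print its five fields for one record.
# The "train" split name is an assumption.
from datasets import load_dataset

ds = load_dataset("Azure99/blossom-math-v3", split="train")
sample = ds[0]
for field in ("id", "input", "output", "answer", "dataset"):
    print(f"{field}: {str(sample[field])[:80]}")
```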
### Dataset Limitations
All responses in this dataset were generated by gpt-3.5-turbo-0613 and have passed preliminary checks, but they may still contain inaccurate answers. | Azure99/blossom-math-v3 | [
"task_categories:text-generation",
"task_categories:text2text-generation",
"size_categories:10K<n<100K",
"language:zh",
"language:en",
"license:apache-2.0",
"region:us"
] | 2023-12-21T12:52:01+00:00 | {"language": ["zh", "en"], "license": "apache-2.0", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation", "text2text-generation"]} | 2023-12-23T04:20:33+00:00 | [] | [
"zh",
"en"
] | TAGS
#task_categories-text-generation #task_categories-text2text-generation #size_categories-10K<n<100K #language-Chinese #language-English #license-apache-2.0 #region-us
|
9da29788b22a21fcbc21871eb970ba5dc813db0a | Have you ever looked at PIPPA and thought it was too big? Well, this dataset is for you: PIPPA, but smaller, produced by removing any conversation with <= 8 messages or whose user responses are too short.
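A rough sketch of that filter, assuming each conversation is a dict with a `messages` list of `{"role", "text"}` turns and using a hypothetical character threshold for "too short":

```python
# Sketch of the filtering rule above. The conversation schema and the
# "too short" cutoff (average user turn < 20 characters) are assumptions.
MIN_MESSAGES = 9        # i.e. drop any conversation with <= 8 messages
MIN_USER_CHARS = 20     # hypothetical threshold for "user response is too short"

def keep_conversation(convo: dict) -> bool:
    messages = convo["messages"]  # assumed: list of {"role": ..., "text": ...}
    if len(messages) < MIN_MESSAGES:
        return False
    user_turns = [m["text"] for m in messages if m["role"] == "user"]
    if not user_turns:
        return False
    avg_len = sum(len(t) for t in user_turns) / len(user_turns)
    return avg_len >= MIN_USER_CHARS

# filtered = [c for c in conversations if keep_conversation(c)]
```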
| Chat-Error/Squippa | [
"region:us"
] | 2023-12-21T13:21:23+00:00 | {} | 2023-12-21T16:13:04+00:00 | [] | [] | TAGS
#region-us
|
f1da037f5107f8f2c3d4980bcc374d131c2605f2 | # 🍏 Applio-RVC-Fork
> [!NOTE]
> Applio-RVC-Fork is designed to complement existing repositories, and as such, certain features may be in experimental stages, potentially containing bugs. Additionally, there might be instances of coding practices that could be improved or refined. It is not intended to replace any other repository.
[](https://discord.gg/IAHispano) [](https://bot.applio.org) [](https://docs.applio.org)
## 📚 Table of Contents
_This README has been enhanced by incorporating the features introduced in Applio-RVC-Fork to the original [Mangio-RVC-Fork README](https://github.com/Mangio621/Mangio-RVC-Fork/blob/main/README.md), along with additional details and explanations._
1. [Improvements of Applio Over RVC](#-improvements-of-applio-rvc-fork-over-rvc)
2. [Additional Features of This Repository](#️-additional-features-of-this-repository)
3. [Todo Tasks](#-todo-tasks)
4. [Installation](#-installation)
5. [Running the Web GUI (Inference & Train)](#-running-the-web-gui-inference--train)
6. [Running the CLI (Inference & Train)](#-running-the-cli-inference--train)
7. [Credits](#-credits)
8. [Thanks to all RVC, Mangio and Applio contributors](#-thanks-to-all-rvc-mangio-and-applio-contributors)
## 🎯 Improvements of Applio-RVC-Fork Over RVC
_The comparisons are with respect to the original [Retrieval-based-Voice-Conversion-WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) repository._
### f0 Inference Algorithm Overhaul
- Applio features a comprehensive overhaul of the f0 inference algorithm (a minimal f0-extraction sketch follows this list), including:
- Addition of the pyworld dio f0 method.
- Alternative method for calculating crepe f0.
- Introduction of the torchcrepe crepe-tiny model.
- Customizable crepe_hop_length for the crepe algorithm via both the web GUI and CLI.
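As a rough illustration of one of the listed methods, here is a minimal pyworld `dio` extraction sketch; the input file and frame settings are placeholders, not Applio's defaults:

```python
# Minimal sketch of the pyworld dio f0 method listed above; the input file
# and frame settings are placeholders, not Applio's defaults.
import numpy as np
import pyworld as pw
import soundfile as sf

audio, sr = sf.read("input.wav")      # hypothetical input file
if audio.ndim > 1:
    audio = audio.mean(axis=1)        # mix down to mono
audio = audio.astype(np.float64)      # pyworld expects float64

f0, t = pw.dio(audio, sr, f0_floor=50.0, f0_ceil=1100.0, frame_period=10.0)
f0 = pw.stonemask(audio, f0, t, sr)   # refine the coarse dio estimate
print(f0.shape, float(np.mean(f0[f0 > 0])))  # unvoiced frames are reported as 0
```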
### f0 Crepe Pitch Extraction for Training
- Works on Paperspace machines but not on local macOS/Windows machines (potential memory leak).
### Paperspace Integration (Under maintenance, so it cannot be used for the moment.)
- Applio seamlessly integrates with Paperspace, providing the following features:
- Paperspace argument on infer-web.py (--paperspace) for sharing a Gradio link.
- A dedicated make file tailored for Paperspace users.
### Access to Tensorboard
- Applio grants easy access to Tensorboard via a Makefile and a Python script.
### CLI Functionality
- Applio introduces command-line interface (CLI) functionality, with the addition of the --cli flag in infer-web.py for CLI system usage.
### f0 Hybrid Estimation Method
- Applio offers a novel f0 hybrid estimation method: it takes the nanmedian across a specified array of f0 methods, keeping the best results from multiple methods (CLI exclusive); a minimal sketch follows this list.
- This hybrid estimation method is also available for f0 feature extraction during training.
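A minimal sketch of that hybrid, assuming every method produces an f0 track on the same frame grid and that unvoiced frames are reported as 0:

```python
# Sketch of the hybrid estimate: stack f0 tracks from several methods and take
# the per-frame nanmedian.
import numpy as np

def hybrid_f0(f0_tracks):
    stacked = np.vstack(f0_tracks)                 # (num_methods, num_frames)
    stacked = np.where(stacked > 0, stacked, np.nan)
    return np.nanmedian(stacked, axis=0)           # per-frame median, NaNs ignored

# combined = hybrid_f0([f0_dio, f0_harvest, f0_crepe])  # hypothetical tracks
```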
### UI Changes
#### Inference:
- A complete interface redesign enhances user experience, with notable features such as:
- Audio recording directly from the interface.
- Convenient drop-down menus for audio and .index file selection.
- An advanced settings section with new features like autotune and formant shifting.
#### Training:
- Improved training features include:
- A total epoch slider now limited to 10,000.
- Increased save frequency limit to 100.
- Default recommended options for smoother setup.
- Better adaptation to high-resolution screens.
- A drop-down menu for dataset selection.
- Enhanced saving system options, including Save all files, Save G and D files, and Save model for inference.
#### UVR:
- Applio ensures compatibility with all VR/MDX models for an extended range of possibilities.
#### TTS (Text-to-Speech, New):
- Introducing a new Text-to-Speech (TTS) feature using RVC models.
- Support for multiple languages and Edge-tts/Google-tts.
#### Resources (New):
- Users can now upload models, backups, datasets, and audios from various storage services like Drive, Huggingface, Discord, and more.
- Download audios from YouTube with the ability to automatically separate instrumental and vocals, offering advanced options and UVR support.
#### Extra (New):
- Combine instrumental and vocals with ease, including independent volume control for each track and the option to add effects like reverb, compressor, and noise gate.
- Significant improvements in the processing interface, allowing tasks such as merging models, modifying information, obtaining information, or extracting models effortlessly.
## ⚙️ Additional Features of This Repository
In addition to the aforementioned improvements, this repository offers the following features:
### Enhanced Tone Leakage Reduction
- Implements tone leakage reduction by replacing source features with training-set features using top1 retrieval. This helps in achieving cleaner audio results.
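A hedged sketch of that top-1 retrieval blend; the feature file, feature dimension, and index path are illustrative placeholders, and the 0.78 blend ratio simply mirrors the "feature index ratio" example in the CLI section below:

```python
# Hedged sketch of top-1 retrieval: replace (or blend) each frame's source
# feature with its nearest neighbour from the training-set feature index.
# Paths, the feature dimension, and the blend ratio are placeholders.
import faiss
import numpy as np

index = faiss.read_index("logs/mi-test/added_index.index")  # placeholder index path
train_feats = np.load("logs/mi-test/total_fea.npy")         # placeholder features the index was built on
source_feats = np.random.rand(200, 256).astype(np.float32)  # stand-in for extracted source features

_, neighbours = index.search(source_feats, 1)                # top-1 lookup per frame
retrieved = train_feats[neighbours[:, 0]]

index_rate = 0.78                                            # how strongly retrieved features replace the source
blended = index_rate * retrieved + (1.0 - index_rate) * source_feats
```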
### Efficient Training
- Provides a seamless and speedy training experience, even on relatively modest graphics cards. The system is optimized for efficient resource utilization.
### Data Efficiency
- Supports training with a small dataset, yielding commendable results, especially with audio clips of at least 10 minutes of low-noise speech.
### Overtraining Detection
- This feature keeps track of the current progress trend and stops the training if no improvement is found after 100 epochs (a simplified sketch follows this list).
- During the 100 epochs with no improvement, no progress is saved. This allows you to continue training from the best-found epoch.
- A `.pth` file of the best epoch is saved in the logs folder under `name_[epoch].pth`, and in the weights folder as `name_fittest.pth`. These files are the same.
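A simplified sketch of that check; the training and validation callables are placeholders for Applio's actual loops, and a lower-is-better metric is assumed:

```python
# Simplified sketch of the overtraining check; train_one_epoch and validate
# are placeholders for Applio's actual loops (lower metric = better).
import copy

def train_with_overtraining_check(model, train_one_epoch, validate,
                                  total_epochs, patience=100):
    best_value, best_epoch, best_state = float("inf"), 0, None
    for epoch in range(1, total_epochs + 1):
        train_one_epoch(model)
        value = validate(model)
        if value < best_value:                    # progress: remember this epoch
            best_value, best_epoch = value, epoch
            best_state = copy.deepcopy(model.state_dict())
        elif epoch - best_epoch >= patience:      # 100 epochs without improvement
            print(f"No improvement for {patience} epochs; stopping at epoch {epoch}.")
            break
    return best_epoch, best_state                 # e.g. save as name_fittest.pth
```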
### Mode Collapse Detection
- This feature restarts training before a mode collapse by lowering the batch size until it can progress past the mode collapse (see the sketch after this list).
- If a mode collapse is overcome but another one occurs later, it will reset the batch size to its initial setting. This helps maintain training speed when dealing with multiple collapses.
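A simplified sketch of that backoff, with placeholder callables standing in for Applio's detection and training code:

```python
# Simplified sketch of the batch-size backoff; run_training and
# collapse_detected stand in for Applio's actual training and detection code.
def train_with_collapse_backoff(run_training, collapse_detected,
                                initial_batch_size, min_batch_size=1):
    batch_size = initial_batch_size
    while True:
        state = run_training(batch_size)          # train until finished or collapsed
        if not collapse_detected(state):
            return state                          # finished normally
        if batch_size <= min_batch_size:
            raise RuntimeError("Mode collapse persists at the minimum batch size.")
        batch_size = max(min_batch_size, batch_size // 2)
        print(f"Mode collapse detected; retrying with batch size {batch_size}.")
        # Note: resetting the batch size after a *later*, separate collapse
        # is omitted from this sketch.
```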
## 📝 Todo Tasks
- [ ] **Investigate GPU Detection Issue:** Address the GPU detection problem and ensure proper utilization of Nvidia GPU.
- [ ] **Fix Mode Collapse Prevention Feature:** Refine the mode collapse prevention feature to maintain graph consistency during retraining.
- [ ] **Resolve CUDA Compatibility Issue:** Investigate and resolve the cuFFT error related to CUDA compatibility.
- [ ] **Refactor infer-web.py:** Organize the code of infer-web.py into different files for each tab, enhancing modularity.
- [ ] **Expand UVR Model Options:** Integrate additional UVR models to provide users with more options and flexibility.
- [ ] **Enhance Installation Process:** Improve the system installation process for better user experience and clarity.
- [ ] **Implement Automatic Updates:** Add automatic update functionality to keep the application current with the latest features.
- [ ] **Multilingual Support:** Include more translations for various languages.
- [ ] **Diversify TTS Methods:** Introduce new TTS methods and enhance customization options for a richer user experience.
- [ ] **CLI Improvement:** Enhance the CLI functionality and introduce a pipeline for a more streamlined user experience.
- [ ] **Dependency Updates:** Keep dependencies up-to-date by regularly updating to the latest versions.
- [ ] **Dataset Creation Assistant:** Develop an assistant for creating datasets to simplify and guide users through the process.
## ✨ Installation
### Automatic installation (Windows):
To quickly and effortlessly install Applio along with all the necessary models and configurations on Windows, you can use the [install_Applio.bat](https://github.com/IAHispano/Applio-RVC-Fork/releases) script available in the releases section.
### Manual installation (Windows/MacOS):
**Note for MacOS Users**: When using `faiss 1.7.2` under MacOS, you may encounter a Segmentation Fault: 11 error. To resolve this issue, install `faiss-cpu 1.7.0` using the following command if you're installing it manually with pip:
```bash
pip install faiss-cpu==1.7.0
```
Additionally, you can install Swig on MacOS using brew:
```bash
brew install swig
```
Install requirements:
_Before this, install ffmpeg, wget, git, and Python (this fork only works with Python 3.9.x on Linux)._
```bash
wget https://github.com/IAHispano/Applio-RVC-Fork/releases/download/v2.0.0/install_Applio-linux.sh
chmod +x install_Applio-linux.sh && ./install_Applio-linux.sh
```
### Manual installation (Paperspace):
```bash
cd Applio-RVC-Fork
make install # Do this every time you start your Paperspace machine
```
## 🪄 Running the Web GUI (Inference & Train)
_Use --paperspace or --colab if on cloud system._
```bash
python infer-web.py --pycmd python --port 3000
```
## 💻 Running the CLI (Inference & Train)
```bash
python infer-web.py --pycmd python --cli
```
```bash
Applio-RVC-Fork CLI
Welcome to the CLI version of RVC. Please read the documentation on README.MD to understand how to use this app.
You are currently in 'HOME':
go home : Takes you back to home with a navigation list.
go infer : Takes you to inference command execution.
go pre-process : Takes you to training step.1) pre-process command execution.
go extract-feature : Takes you to training step.2) extract-feature command execution.
go train : Takes you to training step.3) begin or continue training command execution.
go train-feature : Takes you to the train feature index command execution.
go extract-model : Takes you to the extract small model command execution.
HOME:
```
Typing 'go infer', for example, will take you to the infer page, where you can enter the arguments you wish to use for that specific page. It will look like this:
```bash
HOME: go infer
You are currently in 'INFER':
arg 1) model name with .pth in ./weights: mi-test.pth
arg 2) source audio path: myFolder\MySource.wav
arg 3) output file name to be placed in './audio-outputs': MyTest.wav
arg 4) feature index file path: logs/mi-test/added_IVF3042_Flat_nprobe_1.index
arg 5) speaker id: 0
arg 6) transposition: 0
arg 7) f0 method: harvest (pm, harvest, crepe, crepe-tiny)
arg 8) crepe hop length: 160
arg 9) harvest median filter radius: 3 (0-7)
arg 10) post resample rate: 0
arg 11) mix volume envelope: 1
arg 12) feature index ratio: 0.78 (0-1)
arg 13) Voiceless Consonant Protection (Less Artifact): 0.33 (Smaller number = more protection. 0.50 means Dont Use.)
Example: mi-test.pth saudio/Sidney.wav myTest.wav logs/mi-test/added_index.index 0 -2 harvest 160 3 0 1 0.95 0.33
INFER: <INSERT ARGUMENTS HERE OR COPY AND PASTE THE EXAMPLE>
```
## 🏆 Credits
Applio owes its existence to the collaborative efforts of various repositories, including Mangio-RVC-Fork, and all the other credited contributors. Without their contributions, Applio would not have been possible. Therefore, we kindly request that if you appreciate the work we've accomplished, you consider exploring the projects mentioned in our credits.
Our goal is not to supplant RVC or Mangio; rather, we aim to provide a contemporary and up-to-date alternative for the entire community.
- [VITS](https://github.com/jaywalnut310/vits) by jaywalnut310
- [Retrieval-based-Voice-Conversion-WebUI](https://github.com/RVC-Project/Retrieval-based-Voice-Conversion-WebUI) by RVC-Project
- [Mangio-RVC-Fork](https://github.com/Mangio621/Mangio-RVC-Fork) by Mangio621
- [Mangio-RVC-Tweaks](https://github.com/alexlnkp/Mangio-RVC-Tweaks) by alexlnkp
- [RVG_tts](https://github.com/Foxify52/RVG_tts) by Foxify52
- [RMVPE](https://github.com/Dream-High/RMVPE) by Dream-High
- [ContentVec](https://github.com/auspicious3000/contentvec/) by auspicious3000
- [HIFIGAN](https://github.com/jik876/hifi-gan) by jik876
- [Gradio](https://github.com/gradio-app/gradio) by gradio-app
- [FFmpeg](https://github.com/FFmpeg/FFmpeg) by FFmpeg
- [Ultimate Vocal Remover](https://github.com/Anjok07/ultimatevocalremovergui) by Anjok07
- [audio-slicer](https://github.com/openvpi/audio-slicer) by openvpi
- [Ilaria-Audio-Analyzer](https://github.com/TheStingerX/Ilaria-Audio-Analyzer) by Ilaria
> [!WARNING]
> If you believe you've made contributions to the code utilized in Applio and should be acknowledged in the credits, please feel free to open a pull request (PR). It's possible that we may have unintentionally overlooked your contributions, and we appreciate your proactive approach in ensuring proper recognition.
## 🙏 Thanks to all RVC, Mangio and Applio contributors
### RVC:
<a href="https://github.com/liujing04/Retrieval-based-Voice-Conversion-WebUI/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=liujing04/Retrieval-based-Voice-Conversion-WebUI" />
</a>
### Applio & Mangio:
<a href="https://github.com/IAHispano/Applio-RVC-Fork/graphs/contributors" target="_blank">
<img src="https://contrib.rocks/image?repo=IAHispano/Applio-RVC-Fork" />
</a>
| kanoyo/kaggle | [
"region:us"
] | 2023-12-21T13:25:06+00:00 | {} | 2023-12-21T13:51:39+00:00 | [] | [] | TAGS
#region-us
|
1b911e3b17681610677acae17bfa8f0dc812243f | # AmazonBeauty_m1
+ **Data format:**
user_id item1 item2 ...
+ **Download:** https://huggingface.co/datasets/reczoo/AmazonBeauty_m1/tree/main
+ **Repository:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. [A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks](https://hyclex.github.io/papers/paper_sun2019BGCN.pdf). In KDD 2020.
  - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.txt
66fb687136d55b51742905ece189da31 test.txt
53cc9d39bc79f13c9bd3e75bd5121d1d train.txt
```
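
For reference, here is a minimal Python sketch of how the whitespace-separated interaction files above could be parsed. The file names follow this card; everything else (function and variable names) is illustrative.

```python
# Each line is "user_id item1 item2 ...": map every user to the
# list of items they interacted with.
def load_interactions(path):
    user_items = {}
    with open(path, "r", encoding="utf-8") as f:
        for line in f:
            tokens = line.split()
            if not tokens:  # skip blank lines
                continue
            user_items[tokens[0]] = tokens[1:]
    return user_items

train = load_interactions("train.txt")
test = load_interactions("test.txt")
print(len(train), "training users")
```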
| reczoo/AmazonBeauty_m1 | [
"arxiv:2109.12613",
"region:us"
] | 2023-12-21T13:29:06+00:00 | {} | 2023-12-21T15:18:49+00:00 | [
"2109.12613"
] | [] | TAGS
#arxiv-2109.12613 #region-us
| # AmazonBeauty_m1
+ Data format:
user_id item1 item2 ...
+ Download: URL
+ Repository: URL
+ Used by papers:
- Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.
  - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
+ Check the md5sum for data integrity:
| [
"# AmazonBeauty_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-2109.12613 #region-us \n",
"# AmazonBeauty_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
14,
191
] | [
"passage: TAGS\n#arxiv-2109.12613 #region-us \n# AmazonBeauty_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] |
b2e7e7e499c6d0df832d99e1dc7be3d34a35612c | This dataset contains 2000 samples for dysarthric males, dysarthric females, non-dysarthric males, and non-dysarthric females.
The original TORGO database contains 18 GB of data; to download it and for more information, please refer to the following link:
http://www.cs.toronto.edu/~complingweb/data/TORGO/torgo.html
This database should be used only for academic purposes.
Database / Licence Reference:
Rudzicz, F., Namasivayam, A.K., Wolff, T. (2012) The TORGO database of acoustic and articulatory speech from speakers with dysarthria. Language Resources and Evaluation, 46(4), pages 523--541.
Data Information:
It contains four folders with descriptions below,
dysarthria_female: 500 samples of dysarthric female audio recorded on different sessions.
dysarthria_male: 500 samples of dysarthric male audio recorded on different sessions.
non_dysarthria_female: 500 samples of non-dysarthric female audio recorded on different sessions.
non_dysarthria_male: 500 samples of non-dysarthric male audio recorded on different sessions.
data.csv
filename: audio file path
is_dysarthria: non-dysarthria or dysarthria
gender: male or female
Application of the data,
Applying deep learning technology to classify dysarthria and non-dysarthria patients
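
As a starting point for such experiments, a small pandas sketch for loading the metadata file described above (column names and values are taken from this description; the binary label encoding is an assumption):

```python
import pandas as pd

# Load the metadata CSV with columns: filename, is_dysarthria, gender.
df = pd.read_csv("data.csv")
print(df["is_dysarthria"].value_counts())
print(df["gender"].value_counts())

# Hypothetical binary label for a classification experiment.
df["label"] = (df["is_dysarthria"] == "dysarthria").astype(int)
```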
References:
Dumane, P., Hungund, B., Chavan, S. (2021). Dysarthria Detection Using Convolutional Neural Network. In: Pawar, P.M., Balasubramaniam, R., Ronge, B.P., Salunkhe, S.B., Vibhute, A.S., Melinamath, B. (eds) Techno-Societal 2020. Springer, Cham. https://doi.org/10.1007/978-3-030-69921-5_45 | birgermoell/dysarthria | [
"region:us"
] | 2023-12-21T13:42:00+00:00 | {} | 2023-12-21T13:45:53+00:00 | [] | [] | TAGS
#region-us
| This dataset contains 2000 samples for dysarthric males, dysarthric females, non-dysarthric males, and non-dysarthric females.
The original TORGO database contains 18 GB of data; to download it and for more information, please refer to the following link:
URL
This database should be used only for academic purposes.
Database / Licence Reference:
Rudzicz, F., Namasivayam, A.K., Wolff, T. (2012) The TORGO database of acoustic and articulatory speech from speakers with dysarthria. Language Resources and Evaluation, 46(4), pages 523--541.
Data Information:
It contains four folders with descriptions below,
dysarthria_female: 500 samples of dysarthric female audio recorded on different sessions.
dysarthria_male: 500 samples of dysarthric male audio recorded on different sessions.
non_dysarthria_female: 500 samples of non-dysarthric female audio recorded on different sessions.
non_dysarthria_male: 500 samples of non-dysarthric male audio recorded on different sessions.
URL
filename: audio file path
is_dysarthria: non-dysarthria or dysarthria
gender: male or female
Application of the data,
Applying deep learning technology to classify dysarthria and non-dysarthria patients
References:
Dumane, P., Hungund, B., Chavan, S. (2021). Dysarthria Detection Using Convolutional Neural Network. In: Pawar, P.M., Balasubramaniam, R., Ronge, B.P., Salunkhe, S.B., Vibhute, A.S., Melinamath, B. (eds) Techno-Societal 2020. Springer, Cham. URL | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
52c095d5d08b122479d5c9c0b44dbf9b1644c721 | # Dataset Card for "Leon-Chess-Dataset-19k-BOS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-19k-BOS | [
"region:us"
] | 2023-12-21T13:46:12+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10517279, "num_examples": 19383}], "download_size": 5395613, "dataset_size": 10517279}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:46:15+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-19k-BOS"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-19k-BOS\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-19k-BOS\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-19k-BOS\"\n\nMore Information needed"
] |
2936ce8759d19adfd5ff72af7ee4a427a17843a4 | # Dataset Card for "Leon-Chess-Dataset-19k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-19k | [
"region:us"
] | 2023-12-21T13:46:46+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 10459130, "num_examples": 19383}], "download_size": 5391428, "dataset_size": 10459130}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:46:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-19k"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-19k\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-19k\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-19k\"\n\nMore Information needed"
] |
8fa24678df2db98baca89d5775c0b5e71b1dfde2 | # Dataset Card for "Leon-Chess-Dataset-71k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-71k | [
"region:us"
] | 2023-12-21T13:46:59+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38778712, "num_examples": 71641}], "download_size": 19940618, "dataset_size": 38778712}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:47:03+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-71k"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-71k\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-71k\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-71k\"\n\nMore Information needed"
] |
fea9da478b739a38925c24ec7df72da654089126 | # Dataset Card for "Leon-Chess-Dataset-71k-BOS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-71k-BOS | [
"region:us"
] | 2023-12-21T13:47:09+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 38993636, "num_examples": 71641}], "download_size": 19959801, "dataset_size": 38993636}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:47:11+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-71k-BOS"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-71k-BOS\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-71k-BOS\"\n\nMore Information needed"
] | [
6,
25
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-71k-BOS\"\n\nMore Information needed"
] |
06ad1a2cab7be1b4293d2de1cca7d2f4f8359b70 | # Dataset Card for "Leon-Chess-Dataset-350k-BOS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-350k-BOS | [
"region:us"
] | 2023-12-21T13:47:18+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 185759593, "num_examples": 345351}], "download_size": 94897914, "dataset_size": 185759593}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:47:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-350k-BOS"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-350k-BOS\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-350k-BOS\"\n\nMore Information needed"
] | [
6,
25
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-350k-BOS\"\n\nMore Information needed"
] |
da554c33ad81c5ba54151d5670579b87b1b522c5 | # Dataset Card for "Leon-Chess-Dataset-350k"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-350k | [
"region:us"
] | 2023-12-21T13:47:28+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 184723535, "num_examples": 345351}], "download_size": 94791082, "dataset_size": 184723535}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:47:32+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-350k"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-350k\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-350k\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-350k\"\n\nMore Information needed"
] |
8eb7c1390cea0d48452bc841f0939c4abcee596e | # Dataset Card for "Leon-Chess-Dataset-1M"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-1M | [
"region:us"
] | 2023-12-21T13:47:54+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 551374495, "num_examples": 1028170}], "download_size": 282346024, "dataset_size": 551374495}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:48:05+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-1M"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-1M\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-1M\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-1M\"\n\nMore Information needed"
] |
f367aa499330074052279dff53f08f48a3233a49 | # Dataset Card for "Leon-Chess-Dataset-1M-BOS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-1M-BOS | [
"region:us"
] | 2023-12-21T13:48:10+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 554459020, "num_examples": 1028170}], "download_size": 282676393, "dataset_size": 554459020}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:48:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-1M-BOS"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-1M-BOS\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-1M-BOS\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-1M-BOS\"\n\nMore Information needed"
] |
c4b6d6395124c85b28098a168002b928201ab358 | # Dataset Card for "Leon-Chess-Dataset-raw-tokenized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-raw-tokenized | [
"region:us"
] | 2023-12-21T13:51:41+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 13387092364, "num_examples": 23521034}], "download_size": 6772381704, "dataset_size": 13387092364}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:56:26+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-raw-tokenized"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-raw-tokenized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-raw-tokenized\"\n\nMore Information needed"
] | [
6,
25
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-raw-tokenized\"\n\nMore Information needed"
] |
6f160c855c924dd303ac38ce24f162920e23dc72 | # Dataset Card for "alignment"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | probejie/alignment | [
"region:us"
] | 2023-12-21T13:57:07+00:00 | {"dataset_info": {"features": [{"name": "input_text", "dtype": "string"}, {"name": "best_response", "dtype": "string"}, {"name": "worse_response", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 7268, "num_examples": 10}], "download_size": 11503, "dataset_size": 7268}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-04T11:50:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "alignment"
More Information needed | [
"# Dataset Card for \"alignment\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"alignment\"\n\nMore Information needed"
] | [
6,
12
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"alignment\"\n\nMore Information needed"
] |
bc28e5d9f622ae7d04cadb1a1f9730083527912e | # Dataset Card for "Leon-Chess-Dataset-raw-xlan"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-raw-xlan | [
"region:us"
] | 2023-12-21T13:57:25+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 15609390258, "num_examples": 24543537}], "download_size": 9338596586, "dataset_size": 15609390258}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T14:03:29+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-raw-xlan"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-raw-xlan\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-raw-xlan\"\n\nMore Information needed"
] | [
6,
24
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-raw-xlan\"\n\nMore Information needed"
] |
68c65035478513a0f3fdd23affc426fad630de83 | # Dataset Card for "Leon-Chess-Dataset-270k-All-Moves-BOS"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-270k-All-Moves-BOS | [
"region:us"
] | 2023-12-21T13:57:46+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 85112470, "num_examples": 270052}], "download_size": 6507602, "dataset_size": 85112470}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:57:50+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-270k-All-Moves-BOS"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-270k-All-Moves-BOS\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-270k-All-Moves-BOS\"\n\nMore Information needed"
] | [
6,
30
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-270k-All-Moves-BOS\"\n\nMore Information needed"
] |
8ef41e16f1f0afa64346e676bb598be480958f96 | # Dataset Card for "Leon-Chess-Dataset-270k-All-Moves"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Leon-LLM/Leon-Chess-Dataset-270k-All-Moves | [
"region:us"
] | 2023-12-21T13:57:58+00:00 | {"dataset_info": {"features": [{"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 84302286, "num_examples": 270052}], "download_size": 6464002, "dataset_size": 84302286}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T13:58:01+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Leon-Chess-Dataset-270k-All-Moves"
More Information needed | [
"# Dataset Card for \"Leon-Chess-Dataset-270k-All-Moves\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Leon-Chess-Dataset-270k-All-Moves\"\n\nMore Information needed"
] | [
6,
27
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Leon-Chess-Dataset-270k-All-Moves\"\n\nMore Information needed"
] |
69008aca172fea74b441667e759213a4c0591a4e | # AmazonBooks_m1
+ **Data format:**
user_id item1 item2 ...
+ **Download:** https://huggingface.co/datasets/reczoo/AmazonBooks_m1/tree/main
+ **Repository:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. [LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation](https://arxiv.org/abs/2002.02126). In SIGIR 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. [UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation](https://arxiv.org/abs/2110.15114). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.txt
5b1125ef3bf4118a7988f1fd8ce52ef9 item_list.txt
30f8ccfea18d25007ba9fb9aba4e174d test.txt
c916ecac04ca72300a016228258b41ed train.txt
132f8a5d6d35d5fdde1e0396488be235 user_list.txt
```
| reczoo/AmazonBooks_m1 | [
"arxiv:2002.02126",
"arxiv:2109.12613",
"arxiv:2110.15114",
"region:us"
] | 2023-12-21T14:03:50+00:00 | {} | 2023-12-21T15:19:22+00:00 | [
"2002.02126",
"2109.12613",
"2110.15114"
] | [] | TAGS
#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us
| # AmazonBooks_m1
+ Data format:
user_id item1 item2 ...
+ Download: URL
+ Repository: URL
+ Used by papers:
- Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.
+ Check the md5sum for data integrity:
| [
"# AmazonBooks_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n",
"# AmazonBooks_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
32,
209
] | [
"passage: TAGS\n#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n# AmazonBooks_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] |
ff4ac583bc5399b59234278fa5398b86061d0b61 | # AmazonCDs_m1
+ **Data format:**
user_id item1 item2 ...
+ **Download:** https://huggingface.co/datasets/reczoo/AmazonCDs_m1/tree/main
+ **Repository:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. [A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks](https://hyclex.github.io/papers/paper_sun2019BGCN.pdf). In KDD 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. [UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation](https://arxiv.org/abs/2110.15114). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.txt
d29acb66d0fb74bc3bc0791cbbce5cf2 test.txt
2df6a35cac4373cf3eef95f75568da0a train.txt
```
| reczoo/AmazonCDs_m1 | [
"arxiv:2109.12613",
"arxiv:2110.15114",
"region:us"
] | 2023-12-21T14:06:50+00:00 | {} | 2023-12-21T15:18:14+00:00 | [
"2109.12613",
"2110.15114"
] | [] | TAGS
#arxiv-2109.12613 #arxiv-2110.15114 #region-us
| # AmazonCDs_m1
+ Data format:
user_id item1 item2 ...
+ Download: URL
+ Repository: URL
+ Used by papers:
- Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.
+ Check the md5sum for data integrity:
| [
"# AmazonCDs_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-2109.12613 #arxiv-2110.15114 #region-us \n",
"# AmazonCDs_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
23,
249
] | [
"passage: TAGS\n#arxiv-2109.12613 #arxiv-2110.15114 #region-us \n# AmazonCDs_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Download: URL\n+ Repository: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] |
d6951f1a171149ebb7729505984bbed2d4ab653a | # AmazonElectronics_m1
+ **Data format:**
Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]
+ **Source:** https://cseweb.ucsd.edu/~jmcauley/datasets.html
+ **Download:** https://huggingface.co/datasets/reczoo/AmazonElectronics_m1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Wenhui Yu, Zheng Qin. [Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning](https://arxiv.org/abs/2007.07204). In SIGIR 2020.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. [UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation](https://arxiv.org/abs/2110.15114). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.json
7a0fa5d0da5dc5d5008da02b554ef688 test_data.json
ca71f3f5b9ada393ffd5490eba84c7db train_data.json
7f2db9b5b0de91c7d757ed6ed6095a5a validation_data.json
```
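
A minimal sketch of reading these JSON files, assuming each file stores one list of interacted items per user as described above (with the outer index serving as the user id):

```python
import json

with open("train_data.json", "r", encoding="utf-8") as f:
    train_data = json.load(f)

print("number of users:", len(train_data))
print("items of user 0:", train_data[0])
```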
| reczoo/AmazonElectronics_m1 | [
"arxiv:2007.07204",
"arxiv:2110.15114",
"region:us"
] | 2023-12-21T14:08:29+00:00 | {} | 2023-12-27T06:57:44+00:00 | [
"2007.07204",
"2110.15114"
] | [] | TAGS
#arxiv-2007.07204 #arxiv-2110.15114 #region-us
| # AmazonElectronics_m1
+ Data format:
Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]
+ Source: URL
+ Download: URL
+ RecZoo Datasets: URL
+ Used by papers:
- Wenhui Yu, Zheng Qin. Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning. In SIGIR 2020.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.
+ Check the md5sum for data integrity:
| [
"# AmazonElectronics_m1\n\n+ Data format:\n Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Wenhui Yu, Zheng Qin. Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning. In SIGIR 2020.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-2007.07204 #arxiv-2110.15114 #region-us \n",
"# AmazonElectronics_m1\n\n+ Data format:\n Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Wenhui Yu, Zheng Qin. Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning. In SIGIR 2020.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
23,
172
] | [
"passage: TAGS\n#arxiv-2007.07204 #arxiv-2110.15114 #region-us \n# AmazonElectronics_m1\n\n+ Data format:\n Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Wenhui Yu, Zheng Qin. Sampler Design for Implicit Feedback Data by Noisy-label Robust Learning. In SIGIR 2020.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] |
d3a792d4a67a919fa5a80eee00a30cc0ff8da97f | # AmazonMovies_m1
+ **Data format:**
user_id item1 item2 ...
+ **Source:** https://cseweb.ucsd.edu/~jmcauley/datasets.html
+ **Download:** https://huggingface.co/datasets/reczoo/AmazonMovies_m1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. [A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks](https://hyclex.github.io/papers/paper_sun2019BGCN.pdf). In KDD 2020.
  - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.txt
c02e5f6579aa51950aa875c462a0204b test.txt
3e9d30eacd30330a9feaa0fdb17760ba train.txt
```
| reczoo/AmazonMovies_m1 | [
"arxiv:2109.12613",
"region:us"
] | 2023-12-21T14:09:10+00:00 | {} | 2023-12-24T02:30:48+00:00 | [
"2109.12613"
] | [] | TAGS
#arxiv-2109.12613 #region-us
| # AmazonMovies_m1
+ Data format:
user_id item1 item2 ...
+ Source: URL
+ Download: URL
+ RecZoo Datasets: URL
+ Used by papers:
- Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.
  - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
+ Check the md5sum for data integrity:
| [
"# AmazonMovies_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-2109.12613 #region-us \n",
"# AmazonMovies_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
14,
199
] | [
"passage: TAGS\n#arxiv-2109.12613 #region-us \n# AmazonMovies_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Jianing Sun, Wei Guo, Dengcheng Zhang, Yingxue Zhang, Florence Regol, Yaochen Hu, Huifeng Guo, Ruiming Tang, Han Yuan, Xiuqiang He, Mark Coates. A Framework for Recommending Accurate and Diverse Items Using Bayesian Graph Convolutional Neural Networks. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] |
58b9cc7c3b1cd185cce221e0f32b9cc8406dd9fe | # CiteUlikeA_m1
+ **Data format:**
user_id item1 item2 ...
+ **Source:** http://www.citeulike.org
+ **Download:** https://huggingface.co/datasets/reczoo/CiteUlikeA_m1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Shuyi Ji, Yifan Feng, Rongrong Ji, Xibin Zhao, Wanwan Tang, Yue Gao. [Dual Channel Hypergraph Collaborative Filtering](https://dl.acm.org/doi/10.1145/3394486.3403253). In KDD 2020.
  - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.txt
c9d2de139ac69d480264b6221a567324 test.txt
f037c7ac8f9d8142bb5fd137ff61ad0c train.txt
```
| reczoo/CiteUlikeA_m1 | [
"arxiv:2109.12613",
"region:us"
] | 2023-12-21T14:10:01+00:00 | {} | 2023-12-24T02:41:40+00:00 | [
"2109.12613"
] | [] | TAGS
#arxiv-2109.12613 #region-us
| # CiteUlikeA_m1
+ Data format:
user_id item1 item2 ...
+ Source: URL
+ Download: URL
+ RecZoo Datasets: URL
+ Used by papers:
- Shuyi Ji, Yifan Feng, Rongrong Ji, Xibin Zhao, Wanwan Tang, Yue Gao. Dual Channel Hypergraph Collaborative Filtering. In KDD 2020.
  - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
+ Check the md5sum for data integrity:
| [
"# CiteUlikeA_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Shuyi Ji, Yifan Feng, Rongrong Ji, Xibin Zhao, Wanwan Tang, Yue Gao. Dual Channel Hypergraph Collaborative Filtering. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-2109.12613 #region-us \n",
"# CiteUlikeA_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Shuyi Ji, Yifan Feng, Rongrong Ji, Xibin Zhao, Wanwan Tang, Yue Gao. Dual Channel Hypergraph Collaborative Filtering. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] | [
14,
153
] | [
"passage: TAGS\n#arxiv-2109.12613 #region-us \n# CiteUlikeA_m1\n\n+ Data format: \nuser_id item1 item2 ...\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers: \n - Shuyi Ji, Yifan Feng, Rongrong Ji, Xibin Zhao, Wanwan Tang, Yue Gao. Dual Channel Hypergraph Collaborative Filtering. In KDD 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filterin. In CIKM 2021.\n\n+ Check the md5sum for data integrity:"
] |
d18cf8efd0f159548cd674c7320ca9e39046acb8 | # Gowalla_m1
+ **Dataset description:**
The dataset statistics are summarized as follows:
| Dataset ID | #Users | #Items | #Interactions | #Train | #Test | Density |
|:--------------:|:------:|:------:|:-------------:|:---------:|:-------:|:-------:|
| Gowalla_m1 | 29,858 | 40,981 | 1,027,370 | 810,128 | 217,242 | 0.00084 |
+ **Source:** https://snap.stanford.edu/data/loc-gowalla.html
+ **Download:** https://huggingface.co/datasets/reczoo/Gowalla_m1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. [LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation](https://arxiv.org/abs/2002.02126). In SIGIR 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. [UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation](https://arxiv.org/abs/2110.15114). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.txt
13b1c0d75b07b8cea9413f40042f476f item_list.txt
c04e2c4bcd2389f53ed8281816166149 test.txt
5eec1eb2edb8dd648377d348b8e136cf train.txt
f83ec6f2cd974ba6470e8808830cc144 user_list.txt
```
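
The density figure in the table can be reproduced directly from the user, item, and interaction counts (a quick sanity check, not part of the released files):

```python
users, items, interactions = 29_858, 40_981, 1_027_370
density = interactions / (users * items)
print(f"{density:.5f}")  # ~0.00084, matching the table above
```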
| reczoo/Gowalla_m1 | [
"arxiv:2002.02126",
"arxiv:2109.12613",
"arxiv:2110.15114",
"region:us"
] | 2023-12-21T14:10:34+00:00 | {} | 2023-12-23T06:44:01+00:00 | [
"2002.02126",
"2109.12613",
"2110.15114"
] | [] | TAGS
#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us
| Gowalla\_m1
===========
* Dataset description:
The dataset statistics are summarized as follows:
* Source: URL
* Download: URL
* RecZoo Datasets: URL
* Used by papers:
+ Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR 2020.
+ Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
+ Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.
* Check the md5sum for data integrity:
| [] | [
"TAGS\n#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n"
] | [
32
] | [
"passage: TAGS\n#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n"
] |
1186acccb5a9d3efb310a9dd842a65cc4b666c60 | # Movielens1M_m1
+ **Dataset description:**
  The MovieLens-1M dataset contains 1,000,209 anonymous ratings of approximately 3,900 movies made by 6,040 MovieLens users. We follow the LCF work to split and preprocess the data into training, validation, and test sets.
+ **Data format:**
Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]
+ **Source:** https://grouplens.org/datasets/movielens/1m/
+ **Download:** https://huggingface.co/datasets/reczoo/Movielens1M_m1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Wenhui Yu, Zheng Qin. [Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters](https://arxiv.org/abs/2006.15516). In ICML 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. [UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation](https://arxiv.org/abs/2110.15114). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.json
cdd3ad819512cb87dad2f098c8437df2 test_data.json
4229bc5369f943918103daf7fd92e920 train_data.json
60be3b377d39806f80a43e37c94449f6 validation_data.json
```
| reczoo/Movielens1M_m1 | [
"arxiv:2006.15516",
"arxiv:2109.12613",
"arxiv:2110.15114",
"region:us"
] | 2023-12-21T14:11:46+00:00 | {} | 2023-12-24T01:18:46+00:00 | [
"2006.15516",
"2109.12613",
"2110.15114"
] | [] | TAGS
#arxiv-2006.15516 #arxiv-2109.12613 #arxiv-2110.15114 #region-us
| # Movielens1M_m1
+ Dataset description:
  The MovieLens-1M dataset contains 1,000,209 anonymous ratings of approximately 3,900 movies made by 6,040 MovieLens users. We follow the LCF work to split and preprocess the data into training, validation, and test sets.
+ Data format:
Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]
+ Source: URL
+ Download: URL
+ RecZoo Datasets: URL
+ Used by papers:
- Wenhui Yu, Zheng Qin. Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters. In ICML 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.
+ Check the md5sum for data integrity:
| [
"# Movielens1M_m1\n\n+ Dataset description:\n \n The MovieLens-1M dataset contain 1,000,209 anonymous ratings of approximately 3,900 movies made by 6,040 MovieLens users. We follow the LCF work to split and preprocess the data into training, validation, and test sets, respectively.\n\n+ Data format:\n\n Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers:\n - Wenhui Yu, Zheng Qin. Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters. In ICML 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n \n+ Check the md5sum for data integrity:"
] | [
"TAGS\n#arxiv-2006.15516 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n",
"# Movielens1M_m1\n\n+ Dataset description:\n \n The MovieLens-1M dataset contain 1,000,209 anonymous ratings of approximately 3,900 movies made by 6,040 MovieLens users. We follow the LCF work to split and preprocess the data into training, validation, and test sets, respectively.\n\n+ Data format:\n\n Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers:\n - Wenhui Yu, Zheng Qin. Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters. In ICML 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n \n+ Check the md5sum for data integrity:"
] | [
32,
293
] | [
"passage: TAGS\n#arxiv-2006.15516 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n# Movielens1M_m1\n\n+ Dataset description:\n \n The MovieLens-1M dataset contain 1,000,209 anonymous ratings of approximately 3,900 movies made by 6,040 MovieLens users. We follow the LCF work to split and preprocess the data into training, validation, and test sets, respectively.\n\n+ Data format:\n\n Each user corresponds to a list of interacted items: [[item1, item2], [item3, item4, item5], ...]\n\n+ Source: URL\n+ Download: URL\n+ RecZoo Datasets: URL\n\n+ Used by papers:\n - Wenhui Yu, Zheng Qin. Graph Convolutional Network for Recommendation with Low-pass Collaborative Filters. In ICML 2020.\n - Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.\n - Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.\n \n+ Check the md5sum for data integrity:"
] |
8c6e8b56e5a674b3b60ed859a617cb001b8280d0 | # Yelp18_m1
+ **Dataset description:**
The data statistics are summarized as follows:
| Dataset ID | #Users | #Items | #Interactions | #Train | #Test | Density |
| :-------: | :----: | :----: | :-----------: | :-------: | :-----: | :-----: |
| Yelp18_m1 | 31,668 | 38,048 | 1,561,406 | 1,237,259 | 324,147 | 0.00130 |
+ **Data format:**
user_id item1 item2 ...
+ **Source:** https://www.yelp.com/dataset
+ **Download:** https://huggingface.co/datasets/reczoo/Yelp18_m1/tree/main
+ **RecZoo Datasets:** https://github.com/reczoo/Datasets
+ **Used by papers:**
- Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. [LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation](https://arxiv.org/abs/2002.02126). In SIGIR 2020.
- Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. [SimpleX: A Simple and Strong Baseline for Collaborative Filtering](https://arxiv.org/abs/2109.12613). In CIKM 2021.
- Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. [UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation](https://arxiv.org/abs/2110.15114). In CIKM 2021.
+ **Check the md5sum for data integrity:**
```bash
$ md5sum *.txt
520fe559761ff2c654629201c807f353 item_list.txt
0d57d7399862c32152b045ec5d2698e7 test.txt
1b8b5d22a227e01d6de002c53d32b4c4 train.txt
ae4f810cd6e827f10fc418753c7d92f9 user_list.txt
```
| reczoo/Yelp18_m1 | [
"arxiv:2002.02126",
"arxiv:2109.12613",
"arxiv:2110.15114",
"region:us"
] | 2023-12-21T14:12:20+00:00 | {} | 2023-12-24T02:22:48+00:00 | [
"2002.02126",
"2109.12613",
"2110.15114"
] | [] | TAGS
#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us
| Yelp18\_m1
==========
* Dataset description:
The data statistics are summarized as follows:
* Data format:
user\_id item1 item2 ...
* Source: URL
* Download: URL
* RecZoo Datasets: URL
* Used by papers:
+ Xiangnan He, Kuan Deng, Xiang Wang, Yan Li, Yongdong Zhang, Meng Wang. LightGCN: Simplifying and Powering Graph Convolution Network for Recommendation. In SIGIR 2020.
+ Kelong Mao, Jieming Zhu, Jinpeng Wang, Quanyu Dai, Zhenhua Dong, Xi Xiao, Xiuqiang He. SimpleX: A Simple and Strong Baseline for Collaborative Filtering. In CIKM 2021.
+ Kelong Mao, Jieming Zhu, Xi Xiao, Biao Lu, Zhaowei Wang, Xiuqiang He. UltraGCN: Ultra Simplification of Graph Convolutional Networks for Recommendation. In CIKM 2021.
* Check the md5sum for data integrity:
| [] | [
"TAGS\n#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n"
] | [
32
] | [
"passage: TAGS\n#arxiv-2002.02126 #arxiv-2109.12613 #arxiv-2110.15114 #region-us \n"
] |
3a14e110981b9cdc7e4701fc40d1c2154a5aaaae |
# The Unsplash Lite Dataset (v1.2.1)

The Lite dataset contains all of the same fields as the Full dataset, but is limited to ~25,000 photos.
It can be used for both commercial and non-commercial purposes, provided you abide by [the terms](https://github.com/unsplash/datasets/blob/master/TERMS.md).
The Unsplash Dataset is made available for research purposes.
[It cannot be used to redistribute the images contained within](https://github.com/unsplash/datasets/blob/master/TERMS.md).
To use the Unsplash library in a product, see [the Unsplash API](https://unsplash.com/developers).
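
One possible way to explore this copy of the dataset is through the Hugging Face `datasets` library (a sketch; the field names follow the schema of this repository):

```python
from datasets import load_dataset

ds = load_dataset("1aurent/unsplash-lite", split="train")
sample = ds[0]
print(sample["photo"]["image_url"])  # direct image URL
print(sample["ai"]["description"])   # AI-generated description, if present
```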
 | 1aurent/unsplash-lite | [
"task_categories:text-to-image",
"task_categories:image-to-text",
"size_categories:10K<n<100K",
"language:en",
"license:other",
"unsplash",
"v1.2.1",
"region:us"
] | 2023-12-21T14:42:58+00:00 | {"language": ["en"], "license": "other", "size_categories": ["10K<n<100K"], "task_categories": ["text-to-image", "image-to-text"], "pretty_name": "Unsplash Lite", "dataset_info": {"features": [{"name": "photo", "struct": [{"name": "id", "dtype": "string"}, {"name": "url", "dtype": "string"}, {"name": "image_url", "dtype": "string"}, {"name": "submitted_at", "dtype": "string"}, {"name": "featured", "dtype": "bool"}, {"name": "width", "dtype": "uint16"}, {"name": "height", "dtype": "uint16"}, {"name": "aspect_ratio", "dtype": "float32"}, {"name": "description", "dtype": "string"}, {"name": "blur_hash", "dtype": "string"}]}, {"name": "photographer", "struct": [{"name": "username", "dtype": "string"}, {"name": "first_name", "dtype": "string"}, {"name": "last_name", "dtype": "string"}]}, {"name": "exif", "struct": [{"name": "camera_make", "dtype": "string"}, {"name": "camera_model", "dtype": "string"}, {"name": "iso", "dtype": "string"}, {"name": "aperture_value", "dtype": "string"}, {"name": "focal_length", "dtype": "string"}, {"name": "exposure_time", "dtype": "string"}]}, {"name": "location", "struct": [{"name": "name", "dtype": "string"}, {"name": "latitude", "dtype": "float32"}, {"name": "longitude", "dtype": "float32"}, {"name": "country", "dtype": "string"}, {"name": "city", "dtype": "string"}]}, {"name": "stats", "struct": [{"name": "views", "dtype": "uint32"}, {"name": "downloads", "dtype": "uint32"}]}, {"name": "ai", "struct": [{"name": "description", "dtype": "string"}, {"name": "primary_landmark_name", "dtype": "string"}, {"name": "primary_landmark_latitude", "dtype": "string"}, {"name": "primary_landmark_longitude", "dtype": "string"}, {"name": "primary_landmark_confidence", "dtype": "string"}]}, {"name": "keywords", "list": [{"name": "keyword", "dtype": "string"}, {"name": "ai_service_1_confidence", "dtype": "string"}, {"name": "ai_service_2_confidence", "dtype": "string"}, {"name": "suggested_by_user", "dtype": "bool"}]}, {"name": "collections", "list": [{"name": "collection_id", "dtype": "string"}, {"name": "collection_title", "dtype": "string"}, {"name": "photo_collected_at", "dtype": "string"}]}, {"name": "conversions", "list": [{"name": "converted_at", "dtype": "string"}, {"name": "conversion_type", "dtype": "string"}, {"name": "keyword", "dtype": "string"}, {"name": "anonymous_user_id", "dtype": "string"}, {"name": "conversion_country", "dtype": "string"}]}, {"name": "colors", "list": [{"name": "hex", "dtype": "string"}, {"name": "red", "dtype": "uint8"}, {"name": "green", "dtype": "uint8"}, {"name": "blue", "dtype": "uint8"}, {"name": "keyword", "dtype": "string"}, {"name": "ai_coverage", "dtype": "float32"}, {"name": "ai_score", "dtype": "float32"}]}], "splits": [{"name": "train", "num_bytes": 1202216966, "num_examples": 25000}], "download_size": 618337921, "dataset_size": 1202216966}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "license_name": "unsplash-commercial", "license_link": "https://github.com/unsplash/datasets/blob/master/DOCS.md", "tags": ["unsplash", "v1.2.1"]} | 2023-12-21T15:01:44+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-to-image #task_categories-image-to-text #size_categories-10K<n<100K #language-English #license-other #unsplash #v1.2.1 #region-us
|
# The Unsplash Lite Dataset (v1.2.1)
\n\n\n\n\n\n | peterandrew987/train-indo-squad-v2-train | [
"region:us"
] | 2023-12-21T14:48:57+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "context", "dtype": "string"}, {"name": "question", "dtype": "string"}, {"name": "answers", "struct": [{"name": "answer_start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "indonesian_answers", "struct": [{"name": "answer_end", "dtype": "int64"}, {"name": "answer_start", "dtype": "int64"}, {"name": "text", "dtype": "string"}]}, {"name": "postags", "sequence": {"sequence": {"sequence": "string"}}}], "splits": [{"name": "train", "num_bytes": 97119695, "num_examples": 82029}], "download_size": 19254988, "dataset_size": 97119695}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T15:08:08+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "train-indo-squad-v2-train"
More Information needed | [
"# Dataset Card for \"train-indo-squad-v2-train\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"train-indo-squad-v2-train\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"train-indo-squad-v2-train\"\n\nMore Information needed"
] |
af76212df3a00064ffbffa767ef8c242b2b8aa19 |
## Dataset Description
- **Homepage:** [Projecte AINA](https://projecteaina.cat/tech/)
- **Repository:** [HuggingFace](https://huggingface.co/projecte-aina)
- **Paper:** N/A
- **Leaderboard:** N/A
- **Point of Contact:** [email protected]
### Dataset Summary
Mentor_CA is a professionally-translated, open-source dataset of 10,175 instructions, originally commissioned in Spanish by BSC to [M47 Labs](https://www.m47labs.com/) and translated into Catalan. It is organized into several of the behavioral categories outlined in the [InstructGPT](https://arxiv.org/abs/2203.02155) paper, including closed QA, open QA, general QA, classification, information extraction, summarization, creative writing and brainstorming.
### Supported Tasks and Leaderboards
This dataset is intended for instruction-tuning foundational LLMs for diverse downstream tasks.
### Languages
This dataset is in Catalan (ca-CA).
## Dataset Structure
### Data Instances
The dataset is provided in a DOLLY JSON format, where each record corresponds to a single instruction-following instance and contains the category, the instruction, a context (if available), and the response.
| category | instruction | context | response |
|-----------|-------------------------------------------------------|---------------------------------------------------------------------------------------------------------------------------------------------------------------------|---------------------------------------------|
| open_qa | Qui va inventar el nus de corbata més usat del món? | | L'inventor del nus de corbata més usat del món el va inventar Eduard VIII, duc de Windsor. |
### Data Fields
- `category`: text string containing the type of instruction.
- `instruction`: text string containing the prompt.
- `context`: text string containing the information where the response is based on. These are only available for closed QA, information extraction and summarization.
- `answer`: text string containing the response to the instruction.
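For illustration, here is a minimal sketch of loading the dataset and inspecting these fields with the Hugging Face `datasets` library. The repository id `projecte-aina/MENTOR_CA` and the split handling are assumptions based on this card rather than official loading instructions.

```python
from datasets import load_dataset

# Hypothetical loading sketch; adjust the repository id / split name if they differ.
ds = load_dataset("projecte-aina/MENTOR_CA")
print(ds)  # shows the available splits and record counts

split = list(ds.keys())[0]  # take whichever split is provided
example = ds[split][0]
# Field names as listed above; note the example table renders the last field as "response".
for field in ("category", "instruction", "context", "answer"):
    print(f"{field}: {example.get(field)!r}")
```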
### Data Splits
The only canonical splits provided for Mentor_CA are the categories used for generating the dataset.
| Category | Number of instructions |
|----------------|------------------|
| Open_QA | 2500 |
| General_QA | 1500 |
| Classification | 1450 |
| Closed_QA | 1250 |
| Brainstorming | 1200 |
| Information_extraction | 1000 |
| Summarization | 800 |
| Creative_writing | 475 |
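To sanity-check the per-category counts above, a quick tally along these lines would do (same assumptions as the loading sketch above):

```python
from collections import Counter

from datasets import load_dataset

# Hypothetical check of the category distribution reported in the table above.
ds = load_dataset("projecte-aina/MENTOR_CA")
split = list(ds.keys())[0]
counts = Counter(ds[split]["category"])
for category, n in counts.most_common():
    print(f"{category:>25}: {n}")
```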
## Dataset Creation
### Curation Rationale
Mentor_CA is an open-source dataset of 10,175 records commissioned by BSC to [M47 Labs](https://www.m47labs.com/) to enable large language models to exhibit conversational interactivity. Annotators were asked to create prompt-response pairs in each of eight different instruction categories, including the seven described in the InstructGPT paper, as well as an open-ended free-form category (General QA). Annotators were allowed to use information from any source on the web to gather text fragments for the `context` field in closed QA, information extraction and summarization, and were explicitly instructed to rephrase any response that came directly from the web. They were also asked to evenly distribute the number of questions with the number of topics, which are included in the [topic list file](https://huggingface.co/datasets/projecte-aina/MentorES/blob/main/topic_list.csv). Examples of each behavior were provided to motivate the types of questions and instructions appropriate for each category.
### Source Data
- **Human-generated data**: The annotators were asked to create prompt / response pairs in each of eight different instruction categories.
- **Web**: For instruction categories that require a reference text (closed QA, information extraction and summarization) contributors selected passages from any website. No guidance was given to annotators as to how to select the target passages. If any response was taken from the web, it had to be rephrased.
#### Initial Data Collection and Normalization
To create the dataset, annotators were given a brief description of the annotation task, as well as format specifications for prompts and responses separately. Examples were also provided for each task.
The guidelines were concise by design to encourage a high rate of task completion and freedom of writing. However, care was taken to ensure that the categories were clear and that the boundaries between them did not overlap. For example, closed QA was formulated to include questions that focused on the 5W interrogative pronouns: Who (quién), What (qué), When (cuándo), Where (dónde), Why (por qué); Information extraction could be confused with summarization or closed QA, so the prompt had to include a clear order to extract some kind of information from the given reference text.
#### Who are the source language producers?
The data was generated entirely by annotators at [M47 Labs](https://www.m47labs.com/). Text obtained from the web for the `context` field was kept as is, while the `response` field was rewritten.
### Annotations
The annotation guidelines for each of the categories are as follows:
- **Closed QA** (closed_qa): Questions that can only be answered from a reference text. The annotators must provide a text from any web page and ask a question whose answer is found in the text.
- **Open QA** (open_qa): Questions of common culture that can be answered without consulting any source or with a simple search on the Internet.
- **General QA** (general_qa): Questions that are very general and do not necessarily have to be objective. In fact, it is desirable that they be as subjective as possible.
- **Classification** (classification): Questions that serve to obtain classifications or categorizations of a list of items in different categories to which they may belong.
- **Information Extraction** (inf_ext): Questions used to extract a list of data or information from a reference text.
- **Summarization** (summarization): Questions to ask for a summary or synthesis of a text provided by the annotator.
- **Creative Writing** (creative_wr): Questions phrased as instructions to obtain an original text (a story, a letter, a song, an article, a poem, a narrative, etc.).
- **Brainstorming** (brainstorming): Questions to obtain a list of ideas or possible options to an issue.
#### Annotation process
The annotators were divided into two groups, with one group collecting reference text and asking a question, and the other group providing a response to the instruction.
#### Who are the annotators?
While labels and text were produced by humans, no further information about the people or systems involved was provided when acquiring this resource.
### Personal and Sensitive Information
This dataset contains public information (e.g., some information from the web). To our knowledge, there are no private person’s personal identifiers or sensitive information.
## Considerations for Using the Data
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
- The contents of this dataset may reflect the bias, factual errors and topical focus found in the web.
- Some annotators may not be native Spanish speakers.
- Annotator demographics and subject matter may reflect the makeup of M47 Labs employees.
## Additional Information
### Dataset Curators
[M47 Labs](https://www.m47labs.com/) was in charge of collecting and annotating the dataset. The Language Technologies Unit (LangTech) at the Barcelona Supercomputing Center (BSC) commissioned it to make it open-source.
This work was funded by the [Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya](https://politiquesdigitals.gencat.cat/ca/inici/) within the framework of [Projecte AINA](https://projecteaina.cat/tech/).
### Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the [CC0 1.0](https://creativecommons.org/publicdomain/zero/1.0/) license.
### Citation Information
[N/A]
### Contributions
Thanks to [M47 Labs](https://www.m47labs.com/) for the creation of this dataset. | projecte-aina/MENTOR_CA | [
"task_categories:conversational",
"task_categories:question-answering",
"size_categories:1K<n<10K",
"language:ca",
"license:apache-2.0",
"arxiv:2203.02155",
"region:us"
] | 2023-12-21T15:13:08+00:00 | {"language": ["ca"], "license": "apache-2.0", "size_categories": ["1K<n<10K"], "task_categories": ["conversational", "question-answering"], "pretty_name": "Mentor_CA"} | 2024-01-25T08:20:30+00:00 | [
"2203.02155"
] | [
"ca"
] | TAGS
#task_categories-conversational #task_categories-question-answering #size_categories-1K<n<10K #language-Catalan #license-apache-2.0 #arxiv-2203.02155 #region-us
| Dataset Description
-------------------
* Homepage: Projecte AINA
* Repository: HuggingFace
* Paper: N/A
* Leaderboard: N/A
* Point of Contact: langtech@URL
### Dataset Summary
Mentor\_CA is the professionally-translated open source dataset of 10,175 instructions in Spanish commissioned by BSC to M47 Labs. It is organized in several of the behavioral categories outlined in the InstructGPT paper, including closed QA, open QA, general QA, classification, information extraction, summarization, creative writing and brainstorming.
### Supported Tasks and Leaderboards
This dataset is intended for instructing Foundational LLM for diverse downstream tasks
### Languages
This dataset is in Catalan (ca-CA).
Dataset Structure
-----------------
### Data Instances
The dataset is provided in a DOLLY json format, with records to a single instruction-following instance and contains the category, the instruction, a context, if available, and the response.
### Data Fields
* 'category': text string containing the type of instruction.
* 'instruction': text string containing the prompt.
* 'context': text string containing the information where the response is based on. These are only available for closed QA, information extraction and summarization.
* 'answer': text string containing the response to the instruction.
### Data Splits
We provide canonical splits for Mentor\_CA other than the categories used for generating the dataset.
Dataset Creation
----------------
### Curation Rationale
Mentor\_CA is an open-source dataset of 10,175 records commissioned by BSC to M47 Labs to enable large language models to exhibit conversational interactivity. Annotators were asked to create prompt-response pairs in each of eight different instruction categories, including the seven described in the InstructGPT paper, as well as an open-ended free-form category (General QA). Annotators were allowed to use information from any source on the web to gather text fragments for the 'context' field in closed QA, information extraction and summarization, and were explicitly instructed to rephrase any response that came directly from the web. They were also asked to evenly distribute the number of questions with the number of topics, which are included in the topic list file. Examples of each behavior were provided to motivate the types of questions and instructions appropriate for each category.
### Source Data
* Human-generated data: The annotators were asked to create prompt / response pairs in each of eight different instruction categories.
* Web: For instruction categories that require a reference text (closed QA, information extraction and summarization) contributors selected passages from any website. No guidance was given to annotators as to how to select the target passages. If any response was taken from the web, it had to be rephrased.
#### Initial Data Collection and Normalization
To create a dataset, annotators were given a brief description of the annotation task, as well as format specifications for prompts and responses separately. Examples were also provided for each task.
The guidelines were concise by design to encourage a high rate of task completion and freedom of writing. However, care was taken to ensure that the categories were clear and that the boundaries between them did not overlap. For example, closed QA was formulated to include questions that focused on the 5W interrogative pronouns: Who (quién), What (qué), When (cuándo), Where (dónde), Why (por qué); Information extraction could be confused with summarization or closed QA, so the prompt had to include a clear order to extract some kind of information from the given reference text.
#### Who are the source language producers?
The data was generated entirely by annotators at M47 Labs. Text obtained from the web for the 'context' field was kept as is, while the 'response' field was rewritten.
### Annotations
The annotation guidelines for each of the categories are as follows:
* Closed QA (closed\_qa): Questions that can only be answered from a reference text. The annotators must provide a text from any web page and ask a question whose answer is found in the text.
* Open QA (open\_qa): Questions of common culture that can be answered without consulting any source or with a simple search on the Internet.
* General QA (general\_qa): Questions that are very general and do not necessarily have to be objective. In fact, it is desirable that they be as subjective as possible.
* Classification (classification): Questions that serve to obtain classifications or categorizations of a list of items in different categories to which they may belong.
* Information Extraction (inf\_ext): Questions used to extract a list of data or information from a reference text.
* Summarization (summarization): Questions to ask for a summary or synthesis of a text provided by the annotator.
* Creative Writing (creative\_wr): Questions that should be order-oriented to obtain an original text (a story, a letter, a song, an article, a poem, a narrative, etc.). original text (a story, a letter, a song, an article, a poem, a narrative, etc.).
* Brainstorming (brainstorming): Questions to obtain a list of ideas or possible options to an issue.
#### Annotation process
The annotators were divided into two groups, with one group collecting reference text and asking a question, and the other group providing a response to the instruction.
#### Who are the annotators?
While labels and text were produced by humans, no further information about the people or systems involved was provided when acquiring this resource.
### Personal and Sensitive Information
This dataset contains public information (e.g., some information from the web). To our knowledge, there are no private person’s personal identifiers or sensitive information.
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
[N/A]
### Discussion of Biases
[N/A]
### Other Known Limitations
* The contents of this dataset may reflect the bias, factual errors and topical focus found in the web.
* Some annotators may not be native Spanish speakers.
* Annotator demographics and subject matter may reflect the makeup of M47 Labs employees.
Additional Information
----------------------
### Dataset Curators
M47 Labs was in charge of collecting and annotating the dataset. The Language Technologies Unit (URL) at the Barcelona Supercomputing Center (BSC) commissioned it to make it open-source.
This work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.
### Licensing Information
This dataset can be used for any purpose, whether academic or commercial, under the terms of the CC0 1.0
[N/A]
### Contributions
Thanks to M47 Labs for the creation of this dataset.
| [
"### Dataset Summary\n\n\nMentor\\_CA is the professionally-translated open source dataset of 10,175 instructions in Spanish commissioned by BSC to M47 Labs. It is organized in several of the behavioral categories outlined in the InstructGPT paper, including closed QA, open QA, general QA, classification, information extraction, summarization, creative writing and brainstorming.",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended for instructing Foundational LLM for diverse downstream tasks",
"### Languages\n\n\nThis dataset is in Catalan (ca-CA).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe dataset is provided in a DOLLY json format, with records to a single instruction-following instance and contains the category, the instruction, a context, if available, and the response.",
"### Data Fields\n\n\n* 'category': text string containing the type of instruction.\n* 'instruction': text string containing the prompt.\n* 'context': text string containing the information where the response is based on. These are only available for closed QA, information extraction and summarization.\n* 'answer': text string containing the response to the instruction.",
"### Data Splits\n\n\nWe provide canonical splits for Mentor\\_CA other than the categories used for generating the dataset.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nMentor\\_CA is an open-source dataset of 10,175 records commissioned by BSC to M47 Labs to enable large language models to exhibit conversational interactivity. Annotators were asked to create prompt-response pairs in each of eight different instruction categories, including the seven described in the InstructGPT paper, as well as an open-ended free-form category (General QA). Annotators were allowed to use information from any source on the web to gather text fragments for the 'context' field in closed QA, information extraction and summarization, and were explicitly instructed to rephrase any response that came directly from the web. They were also asked to evenly distribute the number of questions with the number of topics, which are included in the topic list file. Examples of each behavior were provided to motivate the types of questions and instructions appropriate for each category.",
"### Source Data\n\n\n* Human-generated data: The annotators were asked to create prompt / response pairs in each of eight different instruction categories.\n* Web: For instruction categories that require a reference text (closed QA, information extraction and summarization) contributors selected passages from any website. No guidance was given to annotators as to how to select the target passages. If any response was taken from the web, it had to be rephrased.",
"#### Initial Data Collection and Normalization\n\n\nTo create a dataset, annotators were given a brief description of the annotation task, as well as format specifications for prompts and responses separately. Examples were also provided for each task.\n\n\nThe guidelines were concise by design to encourage a high rate of task completion and freedom of writing. However, care was taken to ensure that the categories were clear and that the boundaries between them did not overlap. For example, closed QA was formulated to include questions that focused on the 5W interrogative pronouns: Who (quién), What (qué), When (cuándo), Where (dónde), Why (por qué); Information extraction could be confused with summarization or closed QA, so the prompt had to include a clear order to extract some kind of information from the given reference text.",
"#### Who are the source language producers?\n\n\nThe data was generated entirely by annotators at M47 Labs. Text obtained from the web for the 'context' field was kept as is, while the 'response' field was rewritten.",
"### Annotations\n\n\nThe annotation guidelines for each of the categories are as follows:\n\n\n* Closed QA (closed\\_qa): Questions that can only be answered from a reference text. The annotators must provide a text from any web page and ask a question whose answer is found in the text.\n* Open QA (open\\_qa): Questions of common culture that can be answered without consulting any source or with a simple search on the Internet.\n* General QA (general\\_qa): Questions that are very general and do not necessarily have to be objective. In fact, it is desirable that they be as subjective as possible.\n* Classification (classification): Questions that serve to obtain classifications or categorizations of a list of items in different categories to which they may belong.\n* Information Extraction (inf\\_ext): Questions used to extract a list of data or information from a reference text.\n* Summarization (summarization): Questions to ask for a summary or synthesis of a text provided by the annotator.\n* Creative Writing (creative\\_wr): Questions that should be order-oriented to obtain an original text (a story, a letter, a song, an article, a poem, a narrative, etc.). original text (a story, a letter, a song, an article, a poem, a narrative, etc.).\n* Brainstorming (brainstorming): Questions to obtain a list of ideas or possible options to an issue.",
"#### Annotation process\n\n\nThe annotators were divided into two groups, with one group collecting reference text and asking a question, and the other group providing a response to the instruction.",
"#### Who are the annotators?\n\n\nWhile labels and text were produced by humans, no further information about the people or systems involved was provided when acquiring this resource.",
"### Personal and Sensitive Information\n\n\nThis dataset contains public information (e.g., some information from the web). To our knowledge, there are no private person’s personal identifiers or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[N/A]",
"### Discussion of Biases\n\n\n[N/A]",
"### Other Known Limitations\n\n\n* The contents of this dataset may reflect the bias, factual errors and topical focus found in the web.\n* Some annotators may not be native Spanish speakers.\n* Annotator demographics and subject matter may reflect the makeup of M47 Labs employees.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nM47 Labs was in charge of collecting and annotating the dataset. The Language Technologies Unit (URL) at the Barcelona Supercomputing Center (BSC) commissioned it to make it open-source.\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\n\nThis dataset can be used for any purpose, whether academic or commercial, under the terms of the CC0 1.0\n\n\n[N/A]",
"### Contributions\n\n\nThanks to M47 Labs for the creation of this dataset."
] | [
"TAGS\n#task_categories-conversational #task_categories-question-answering #size_categories-1K<n<10K #language-Catalan #license-apache-2.0 #arxiv-2203.02155 #region-us \n",
"### Dataset Summary\n\n\nMentor\\_CA is the professionally-translated open source dataset of 10,175 instructions in Spanish commissioned by BSC to M47 Labs. It is organized in several of the behavioral categories outlined in the InstructGPT paper, including closed QA, open QA, general QA, classification, information extraction, summarization, creative writing and brainstorming.",
"### Supported Tasks and Leaderboards\n\n\nThis dataset is intended for instructing Foundational LLM for diverse downstream tasks",
"### Languages\n\n\nThis dataset is in Catalan (ca-CA).\n\n\nDataset Structure\n-----------------",
"### Data Instances\n\n\nThe dataset is provided in a DOLLY json format, with records to a single instruction-following instance and contains the category, the instruction, a context, if available, and the response.",
"### Data Fields\n\n\n* 'category': text string containing the type of instruction.\n* 'instruction': text string containing the prompt.\n* 'context': text string containing the information where the response is based on. These are only available for closed QA, information extraction and summarization.\n* 'answer': text string containing the response to the instruction.",
"### Data Splits\n\n\nWe provide canonical splits for Mentor\\_CA other than the categories used for generating the dataset.\n\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\nMentor\\_CA is an open-source dataset of 10,175 records commissioned by BSC to M47 Labs to enable large language models to exhibit conversational interactivity. Annotators were asked to create prompt-response pairs in each of eight different instruction categories, including the seven described in the InstructGPT paper, as well as an open-ended free-form category (General QA). Annotators were allowed to use information from any source on the web to gather text fragments for the 'context' field in closed QA, information extraction and summarization, and were explicitly instructed to rephrase any response that came directly from the web. They were also asked to evenly distribute the number of questions with the number of topics, which are included in the topic list file. Examples of each behavior were provided to motivate the types of questions and instructions appropriate for each category.",
"### Source Data\n\n\n* Human-generated data: The annotators were asked to create prompt / response pairs in each of eight different instruction categories.\n* Web: For instruction categories that require a reference text (closed QA, information extraction and summarization) contributors selected passages from any website. No guidance was given to annotators as to how to select the target passages. If any response was taken from the web, it had to be rephrased.",
"#### Initial Data Collection and Normalization\n\n\nTo create a dataset, annotators were given a brief description of the annotation task, as well as format specifications for prompts and responses separately. Examples were also provided for each task.\n\n\nThe guidelines were concise by design to encourage a high rate of task completion and freedom of writing. However, care was taken to ensure that the categories were clear and that the boundaries between them did not overlap. For example, closed QA was formulated to include questions that focused on the 5W interrogative pronouns: Who (quién), What (qué), When (cuándo), Where (dónde), Why (por qué); Information extraction could be confused with summarization or closed QA, so the prompt had to include a clear order to extract some kind of information from the given reference text.",
"#### Who are the source language producers?\n\n\nThe data was generated entirely by annotators at M47 Labs. Text obtained from the web for the 'context' field was kept as is, while the 'response' field was rewritten.",
"### Annotations\n\n\nThe annotation guidelines for each of the categories are as follows:\n\n\n* Closed QA (closed\\_qa): Questions that can only be answered from a reference text. The annotators must provide a text from any web page and ask a question whose answer is found in the text.\n* Open QA (open\\_qa): Questions of common culture that can be answered without consulting any source or with a simple search on the Internet.\n* General QA (general\\_qa): Questions that are very general and do not necessarily have to be objective. In fact, it is desirable that they be as subjective as possible.\n* Classification (classification): Questions that serve to obtain classifications or categorizations of a list of items in different categories to which they may belong.\n* Information Extraction (inf\\_ext): Questions used to extract a list of data or information from a reference text.\n* Summarization (summarization): Questions to ask for a summary or synthesis of a text provided by the annotator.\n* Creative Writing (creative\\_wr): Questions that should be order-oriented to obtain an original text (a story, a letter, a song, an article, a poem, a narrative, etc.). original text (a story, a letter, a song, an article, a poem, a narrative, etc.).\n* Brainstorming (brainstorming): Questions to obtain a list of ideas or possible options to an issue.",
"#### Annotation process\n\n\nThe annotators were divided into two groups, with one group collecting reference text and asking a question, and the other group providing a response to the instruction.",
"#### Who are the annotators?\n\n\nWhile labels and text were produced by humans, no further information about the people or systems involved was provided when acquiring this resource.",
"### Personal and Sensitive Information\n\n\nThis dataset contains public information (e.g., some information from the web). To our knowledge, there are no private person’s personal identifiers or sensitive information.\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n[N/A]",
"### Discussion of Biases\n\n\n[N/A]",
"### Other Known Limitations\n\n\n* The contents of this dataset may reflect the bias, factual errors and topical focus found in the web.\n* Some annotators may not be native Spanish speakers.\n* Annotator demographics and subject matter may reflect the makeup of M47 Labs employees.\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\nM47 Labs was in charge of collecting and annotating the dataset. The Language Technologies Unit (URL) at the Barcelona Supercomputing Center (BSC) commissioned it to make it open-source.\n\n\nThis work was funded by the Departament de la Vicepresidència i de Polítiques Digitals i Territori de la Generalitat de Catalunya within the framework of Projecte AINA.",
"### Licensing Information\n\n\nThis dataset can be used for any purpose, whether academic or commercial, under the terms of the CC0 1.0\n\n\n[N/A]",
"### Contributions\n\n\nThanks to M47 Labs for the creation of this dataset."
] | [
61,
95,
29,
22,
51,
87,
36,
210,
105,
188,
56,
333,
40,
37,
56,
12,
13,
76,
91,
34,
19
] | [
"passage: TAGS\n#task_categories-conversational #task_categories-question-answering #size_categories-1K<n<10K #language-Catalan #license-apache-2.0 #arxiv-2203.02155 #region-us \n### Dataset Summary\n\n\nMentor\\_CA is the professionally-translated open source dataset of 10,175 instructions in Spanish commissioned by BSC to M47 Labs. It is organized in several of the behavioral categories outlined in the InstructGPT paper, including closed QA, open QA, general QA, classification, information extraction, summarization, creative writing and brainstorming.### Supported Tasks and Leaderboards\n\n\nThis dataset is intended for instructing Foundational LLM for diverse downstream tasks### Languages\n\n\nThis dataset is in Catalan (ca-CA).\n\n\nDataset Structure\n-----------------### Data Instances\n\n\nThe dataset is provided in a DOLLY json format, with records to a single instruction-following instance and contains the category, the instruction, a context, if available, and the response.### Data Fields\n\n\n* 'category': text string containing the type of instruction.\n* 'instruction': text string containing the prompt.\n* 'context': text string containing the information where the response is based on. These are only available for closed QA, information extraction and summarization.\n* 'answer': text string containing the response to the instruction.### Data Splits\n\n\nWe provide canonical splits for Mentor\\_CA other than the categories used for generating the dataset.\n\n\n\nDataset Creation\n----------------",
"passage: ### Curation Rationale\n\n\nMentor\\_CA is an open-source dataset of 10,175 records commissioned by BSC to M47 Labs to enable large language models to exhibit conversational interactivity. Annotators were asked to create prompt-response pairs in each of eight different instruction categories, including the seven described in the InstructGPT paper, as well as an open-ended free-form category (General QA). Annotators were allowed to use information from any source on the web to gather text fragments for the 'context' field in closed QA, information extraction and summarization, and were explicitly instructed to rephrase any response that came directly from the web. They were also asked to evenly distribute the number of questions with the number of topics, which are included in the topic list file. Examples of each behavior were provided to motivate the types of questions and instructions appropriate for each category.### Source Data\n\n\n* Human-generated data: The annotators were asked to create prompt / response pairs in each of eight different instruction categories.\n* Web: For instruction categories that require a reference text (closed QA, information extraction and summarization) contributors selected passages from any website. No guidance was given to annotators as to how to select the target passages. If any response was taken from the web, it had to be rephrased.#### Initial Data Collection and Normalization\n\n\nTo create a dataset, annotators were given a brief description of the annotation task, as well as format specifications for prompts and responses separately. Examples were also provided for each task.\n\n\nThe guidelines were concise by design to encourage a high rate of task completion and freedom of writing. However, care was taken to ensure that the categories were clear and that the boundaries between them did not overlap. For example, closed QA was formulated to include questions that focused on the 5W interrogative pronouns: Who (quién), What (qué), When (cuándo), Where (dónde), Why (por qué); Information extraction could be confused with summarization or closed QA, so the prompt had to include a clear order to extract some kind of information from the given reference text.#### Who are the source language producers?\n\n\nThe data was generated entirely by annotators at M47 Labs. Text obtained from the web for the 'context' field was kept as is, while the 'response' field was rewritten."
] |
cce18e200f96b63db4f71f11a04d313efae7aed9 |
# Hindi Instruct V1 Dataset
This dataset is curated by Sourab Mangrulkar. It was developed on top of the `HuggingFaceH4/no_robots` dataset.
First, the dataset was translated using `ai4bharat/indictrans2-en-indic-1B`, the SoTA translation model developed by AI4Bharat.
Here, it is important to note that the sequence length limit is 256 for input and output sequences.
Hence, I split the text into individual sentences on full stops, create minibatches for translation, and then stitch the translations back together properly.
```python
# `load_dataset` is from the Hugging Face `datasets` library; `batch_translate`,
# `en_indic_model` and `en_indic_tokenizer` are assumed to come from AI4Bharat's
# IndicTrans2 inference utilities.
from datasets import load_dataset


def chunk_and_translate(content):
minibatch = content.split(".")
minibatch = [k for k in minibatch if len(k.strip())>0]
translations = batch_translate(minibatch, "eng_Latn", "hin_Deva", en_indic_model, en_indic_tokenizer)
translated_content = " ".join(translations)
return translated_content
def preprocess(sample):
for turn in sample["messages"]:
turn["content"] = chunk_and_translate(turn["content"])
return {"messages": sample["messages"]}
dataset = load_dataset("HuggingFaceH4/no_robots")
dataset
dataset = dataset.map(
preprocess,
batched=False
)
```
Next, inspired by the demo given by the OpenHathi team [here](https://youtu.be/WKfVzJSDAd8), I did the following:
## Remove Coding samples
After manually going through the coding samples and trying things like keeping the user messages in Hindi while leaving the assistant messages with code snippets in English,
I found that it would still not work, because users often give code snippets and prompt the assistant to help with debugging.
As this would be noise in Hindi instruction tuning, I simply removed all the coding examples.
## Add English and Hinglish instructions to tell the model to reply in Hindi for English user messages and add the CoT prompt `Frist write in English and then translate to Hindi`
With a probability of 0.2, we add one of the prompts `Hindi mein jawab dena.`/`Reply in Hindi.`/`Summarize in Hindi` to the end of user messages in English, with the assistant replying in Hindi.
This helps when the user writes something in English but wants the model to reply in Hindi.
For Chain-of-Thought ability, the OpenHathi team used the prompt "Frist write in English and then translate to Hindi", which leads to better responses.
To enable this, we suffix the prompt `Frist write in English and then translate to Hindi`/`Pehle English mein likhna, phir Hindi mein translate karna.` to user messages in English and
process the assistant message into the template `{english_reply}\nTranslation of the above into Hindi:\n{translated_hindi_reply}`.
```python
import random


# Assumed helper (not shown in the original write-up): Bernoulli draw with probability p.
def bernoulli_sample(p):
    return random.random() < p


hindi_reply_prompt = "Hindi mein jawab dena."
english_reply_prompt = "{prefix} in Hindi."
cot_prompt = "Frist write in English and then translate to Hindi"
cot_hindi_prompt = "Pehle English mein likhna, phir Hindi mein translate karna."
def add_system_prompt(sample):
p = 0.2
language_p = 0.5
suffix_hindi_prompt = bernoulli_sample(p)
suffix_cot_prompt = bernoulli_sample(p)
use_hindi_reply_prompt = bernoulli_sample(language_p)
use_hindi_cot_prompt = bernoulli_sample(language_p)
for i, turn in enumerate(sample["hindi_messages"]):
if suffix_hindi_prompt:
if use_hindi_reply_prompt and i%2==0:
turn["content"] = f"{sample['messages'][i]['content']} {hindi_reply_prompt}"
elif i%2==0:
prefix = "Summarize" if sample["category"] == "Summarize" else "Reply"
turn["content"] = f"{sample['messages'][i]['content']} {english_reply_prompt.format(prefix=prefix)}"
elif suffix_cot_prompt:
if use_hindi_cot_prompt and i%2==0:
turn["content"] = f"{sample['messages'][i]['content']} {cot_hindi_prompt}"
elif i%2==0:
turn["content"] = f"{sample['messages'][i]['content']} {cot_prompt}"
else:
turn["content"] = f"{sample['messages'][i]['content']}\nTranslation of the above into Hindi:\n{turn['content']}"
return {"hindi_messages": sample["hindi_messages"]}
orig_dataset = orig_dataset.map(
add_system_prompt,
batched=False
)
orig_dataset
```
Example outputs:
```bash
[{'content': 'Write a funny, short story about someone who will stop at nothing to go green and reduce their carbon footprint. \n Pehle English mein likhna, phir Hindi mein translate karna.', 'role': 'user'}, {'content': 'Garry has a real green thumb, and taking care of the environment is extremely important to him. He takes every possible step to cut his carbon footprint, and it inspires and mystifies everyone around him in equal measure. Each night before he goes to bed, he cuts off the power to his house so that his electronic devices don’t draw idle power. \n\nAlmost all of his appliances are quite old, as Garry wants to get as much use as possible out of them before contributing to new pollution-generating production. The only exceptions are his new, water-efficient dishwasher and his solar-powered hot water tank. \n\nThe Homeowner’s Association hates Garry because he grows corn on his front lawn. All of his food is locally sourced as much as possible, and he commutes to work on a bike that doesn’t look like it should work. He built it himself out of recycled parts, and it seems to jump and rock like Fred Flinstone’s car. Yet, it stays in one piece and carries him faithfully to work without contributing to pollution. All in all, there’s nothing Garry won’t do to keep his carbon footprint small. \nTranslation of the above into Hindi:\nगैरी का अंगूठा असली में हरा है और पर्यावरण की देखभाल करना उसके लिए बेहद महत्वपूर्ण है। वह अपने कार्बन पदचिह्न को कम करने के लिए हर संभव कदम उठाता है, और यह उसके आसपास सभी लोगों को समान रूप से प्रेरित करता है और रहस्यमयी बनाता है। हर रात सोने से पहले वह अपने घर की बिजली काट देता है ताकि उसके इलेक्ट्रॉनिक उपकरण बेकार बिजली न खींच सकें। उनके लगभग सभी उपकरण काफी पुराने हैं, क्योंकि गैरी नए प्रदूषण पैदा करने वाले उत्पादन में योगदान देने से पहले उनका अधिक से अधिक उपयोग करना चाहते हैं। केवल अपवाद उनके नए, पानी-कुशल डिशवॉशर और उनकी सौर ऊर्जा से चलने वाली गर्म पानी की टंकी हैं। मकान मालिक संघ गैरी से नफरत करता है क्योंकि वह अपने सामने के लॉन में मकई उगाता है। उनका सारा खाना जितना संभव हो सके स्थानीय रूप से प्राप्त किया जाता है, और वह एक ऐसी बाइक पर काम करने के लिए आता है जो ऐसा नहीं लगता कि यह काम करना चाहिए। उन्होंने इसे पुनर्नवीनीकरण भागों से खुद बनाया था, और यह फ्रेड फ्लिनस्टोन की कार की तरह कूदता हुआ प्रतीत होता है। फिर भी, यह एक टुकड़े में रहता है और उसे प्रदूषण में योगदान दिए बिना काम करने के लिए ईमानदारी से ले जाता है। कुल मिलाकर, ऐसा कुछ भी नहीं है जो गैरी अपने कार्बन पदचिह्न को छोटा रखने के लिए न करे।', 'role': 'assistant'}]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{'content': 'Help write a letter of 100 -200 words to my future self for Kyra, reflecting on her goals and aspirations. Frist write in English and then translate to Hindi', 'role': 'user'}, {'content': "Dear Future Self,\n\nI hope you're happy and proud of what you've achieved. As I write this, I'm excited to think about our goals and how far you've come. One goal was to be a machine learning engineer. I hope you've worked hard and become skilled in this field. Keep learning and innovating. Traveling was important to us. I hope you've seen different places and enjoyed the beauty of our world. Remember the memories and lessons. Starting a family mattered to us. If you have kids, treasure every moment. Be patient, loving, and grateful for your family.\n\nTake care of yourself. Rest, reflect, and cherish the time you spend with loved ones. Remember your dreams and celebrate what you've achieved. Your determination brought you here. I'm excited to see the person you've become, the impact you've made, and the love and joy in your life. Embrace opportunities and keep dreaming big.\n\nWith love,\nKyra\nTranslation of the above into Hindi:\nप्रिय भविष्य स्वयं, मुझे आशा है कि आपने जो हासिल किया है उससे आप खुश और गर्वित हैं। जब मैं यह लिख रहा हूँ, तो मैं अपने लक्ष्यों के बारे में सोचने के लिए उत्साहित हूं और सोचता हूँ कि आप कितनी दूर आ गए हैं। एक लक्ष्य मशीन लर्निंग इंजीनियर बनना था। मुझे आशा है कि आपने कड़ी मेहनत की होगी और इस क्षेत्र में कुशल बनेंगे। सीखते रहें और नवाचार करते रहें। हमारे लिए यात्रा करना महत्वपूर्ण था। मुझे आशा है कि आपने विभिन्न स्थानों को देखा होगा और हमारी दुनिया की सुंदरता का आनंद लिया होगा। यादों और सबक को याद रखें। परिवार शुरू करना हमारे लिए मायने रखता था। अगर आपके बच्चे हैं, तो हर पल को संजो कर रखें। अपने परिवार के लिए धैर्य, प्यार और आभारी रहें। अपना ख्याल रखें। आराम करें, सोचें और अपने प्रियजनों के साथ बिताए समय को संजो कर रखें। अपने सपनों को याद रखें और जो आपने हासिल किया है उसका जश्न मनाएं। आपका दृढ़ संकल्प आपको यहाँ ले आया है। आप जिस व्यक्ति के रूप में उभरे हैं, आपने जो प्रभाव डाला है और आपके जीवन में प्यार और खुशी को देखकर मैं उत्साहित हूं। अवसरों को अपनाएँ और बड़े सपने देखते रहें। प्यार के साथ, कायरा।", 'role': 'assistant'}]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{'content': "Answer the following questions using the passage below:\nWho was Harry Potter written by?\nWho originally published the Harry Potter series?\n\nHarry Potter is a series of seven fantasy novels written by British author J. K. Rowling. The novels chronicle the lives of a young wizard, Harry Potter, and his friends Hermione Granger and Ron Weasley, all of whom are students at Hogwarts School of Witchcraft and Wizardry. The main story arc concerns Harry's conflict with Lord Voldemort, a dark wizard who intends to become immortal, overthrow the wizard governing body known as the Ministry of Magic and subjugate all wizards and Muggles (non-magical people).\n\nThe series was originally published in English by Bloomsbury in the United Kingdom and Scholastic Press in the United States. All versions around the world are printed by Grafica Veneta in Italy.[1] A series of many genres, including fantasy, drama, coming-of-age fiction, and the British school story (which includes elements of mystery, thriller, adventure, horror, and romance), the world of Harry Potter explores numerous themes and includes many cultural meanings and references.[2] According to Rowling, the main theme is death.[3] Other major themes in the series include prejudice, corruption, and madness.[4] Hindi mein jawab dena.", 'role': 'user'}, {'content': 'हैरी पॉटर ब्रिटिश लेखक जे. द्वारा लिखा गया था। के. रालिंग यह श्रृंखला मूल रूप से ब्रिटेन में ब्लूम्सबरी और अमेरिका में स्कॉलास्टिक प्रेस द्वारा प्रकाशित की गई थी।', 'role': 'assistant'}]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{'content': 'Write a summary of this article about Euripides by reducing it to less than 90 words. Change the dating system from B.C. to BCE.\n\nThe Ancient Greeks, particularly the Ancient Athenians, are given the credit for inventing theater itself. As a result of that, theater became part of the culture, as well as everything that went along with it. There are theaters located throughout Ancient Greece. Also as a result of this fact, playwrights began to spring up all over Greece. One of the most well known of these is Euripides, who is especially noted for his tragedies. He is also one of the only playwrights whose work has actually survived. Here is some more information about this famous playwright from Ancient Greece:\n\nInformation About Euripides\n\nEuripides as born in 480 B.C. in Salamis, which is an island in the Saronic Gulf in Greece. It is interesting to note that he was born on the same day as the Battle of Salamis, where the ultimate victory of the Greeks prevented an invasion by the Persian Empire. He died in 406 B.C. in the city-state of Macedonia. His mother’s name was Cleito and his father’s name was Mnesarchus. His father insisted that he should be an athlete. Although his parents valued athletics, however, he wasn’t restricted in his education. He also learned artistic pursuits, such as painting, as well as philosophy from masters such as Anaxagorus. He was also a dancer and torch bearer at the rites of Apollo Zosterius. Later on, it was clear that he had a knack for the stage and he eventually began writing the plays he was most famous for.\n\n\nHis personal life, however, wasn’t as successful as his professional one. He was actually married twice and in both instances, his wives weren’t faithful to him. He also had three sons from one of these marriages. After this happened, he moved to a cave in Salamis because he wanted to live alone. Referred to as the Cave of Euripides, after his death his followers started a shrine in his honor.\n\nWorks of Euripides\n\nThere is actually some debate as to how many plays there are that were written by Euripides that we know of. Some sources say that there are 95, others say that there are 92. One thing is certain, however, is that his impact on theater is vast and his plays are still being performed today. Of these plays, a much smaller number survived in its complete form. Only about eighteen or nineteen of his plays had been found in their entirety. Besides the fact that his plays have been entertaining people for centuries, his work has also influenced playwrights through the ages. In other words, his artistic legacy has survived the ages.\n\nEuripides was also a contemporary of two other famous playwrights. Sophocles’ first play deputed thirteen years before his, and Aeschylus’s masterwork, Oresteia, debuted three years before Euripides’ first play was released.\n\nAs you can see, Euripedes has had a big impact on theater not only in Ancient Greece, but in the whole world. Sure, theater was invented in Ancient Greece, but the works of people like Euripides is everlasting and they’re still being performed today. 
Summarize in Hindi.', 'role': 'user'}, {'content': 'प्राचीन यूनानी नाटककार यूरिपिड्स का जन्म 480 ईसा पूर्व में ग्रीक द्वीप सलामिस में हुआ था। युवावस्था में उन्होंने एथलेटिक्स, दर्शन और चित्रकला का अध्ययन किया। वह भगवान अपोलो के संस्कारों के दौरान एक मशाल वाहक और नर्तक थे। हालाँकि इस बात पर सवाल हैं कि उन्होंने कितने नाटक लिखे, उनमें से 18 या 19 आज भी पूर्ण रूप में जीवित हैं। अपनी दूसरी असफल शादी के बाद, उन्होंने सलामिस पर एक गुफा में निवास किया जिसे अब यूरिपिड्स की गुफा कहा जाता है। 406 ईसा पूर्व में उनकी मृत्यु हो गई जो अब मैसेडोनिया है।', 'role': 'assistant'}]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
[{'content': 'Can foxes fit down rabbit burrows? Reply in Hindi.', 'role': 'user'}, {'content': 'लोमड़ी खरगोश के बिलों में फिट नहीं हो सकते हैं, लेकिन यह उन्हें कोशिश करने से नहीं रोकता है। एक लोमड़ी जितना संभव हो उतना नीचे जाएगी और फिर खरगोश को पकड़ने और उसे बाहर निकालने के प्रयास में अपने अर्ध-वापस लेने योग्य पंजों का उपयोग करके खुदाई करेगी।', 'role': 'assistant'}]
----------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------------
```
## Hinglish (Code-Mixing and Transliteration) for colloquial usage
We use the below prompts with the `HinGE` dataset and `https://github.com/google-research-datasets/Hinglish-TOP-Dataset` to add samples that perform different combinations of translation between English, Hindi and Hinglish.
```python
import random

# Re-uses the `bernoulli_sample` helper defined above; the samples mapped through
# `format_to_norobots` are assumed to carry "English", "Hindi" and "Hinglish" columns.
english_to_hinglish_prompt_en = "Translate the following from English into Hinglish."
hingligh_to_english_prompt_en = "Translate the following from Hinglish into English."
english_to_hinglish_prompt_hn = "English se Hinglish mein translate kare."
hingligh_to_english_prompt_hn = "Hinglish se English mein translate kare."
hindi_to_hinglish_prompt_hi = "निम्नलिखित का हिंदी से हिंग्लिश में अनुवाद करें।"
hingligh_to_hindi_prompt_hi = "निम्नलिखित का हिंग्लिश से हिंदी में अनुवाद करें।"
hindi_to_hinglish_prompt_hn = "Hindi se Hinglish mein translate kare."
hingligh_to_hindi_prompt_hn = "Hinglish se Hindi mein translate kare."
def format_to_norobots(sample):
p = 0.25
language_p = 0.5
ranom_num = random.random()
use_hn_prompt = bernoulli_sample(language_p)
hindi_messages = []
if ranom_num<= 0.25:
#en_to_hn
if use_hn_prompt:
user_message = {"content":f"{english_to_hinglish_prompt_hn}\n{sample['English']}", "role": "user"}
else:
user_message = {"content":f"{english_to_hinglish_prompt_en}\n{sample['English']}", "role": "user"}
asst_message = {"content":sample["Hinglish"], "role": "assistant"}
hindi_messages.extend([user_message, asst_message])
elif ranom_num<= 0.5:
#hn_to_en
if use_hn_prompt:
user_message = {"content":f"{hingligh_to_english_prompt_hn}\n{sample['Hinglish']}", "role": "user"}
else:
user_message = {"content":f"{hingligh_to_english_prompt_en}\n{sample['Hinglish']}", "role": "user"}
asst_message = {"content":sample["English"], "role": "assistant"}
hindi_messages.extend([user_message, asst_message])
elif ranom_num<= 0.75:
#hi_to_hn
if use_hn_prompt:
user_message = {"content":f"{hindi_to_hinglish_prompt_hn}\n{sample['Hindi']}", "role": "user"}
else:
user_message = {"content":f"{hindi_to_hinglish_prompt_hi}\n{sample['Hindi']}", "role": "user"}
asst_message = {"content":sample["Hinglish"], "role": "assistant"}
hindi_messages.extend([user_message, asst_message])
else:
#hn_to_hi
if use_hn_prompt:
user_message = {"content":f"{hingligh_to_hindi_prompt_hn}\n{sample['Hinglish']}", "role": "user"}
else:
user_message = {"content":f"{hingligh_to_hindi_prompt_hi}\n{sample['Hinglish']}", "role": "user"}
asst_message = {"content":sample["Hindi"], "role": "assistant"}
hindi_messages.extend([user_message, asst_message])
return {"category": "Transliteration and Code Mixing", "hindi_messages": hindi_messages}
```
Example output:
```bash
[{'content': 'Hinglish se English mein translate kare.\nMere liye reminder set karo to wake up at 6:30 am tomorrow.',
'role': 'user'},
{'content': 'Set a reminder for me to wake up at 630 am tomorrow.',
'role': 'assistant'}]
```
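The write-up does not show how `format_to_norobots` is applied, but the usage is presumably analogous to the other `map` calls above. A hedged sketch (the `hinglish_pairs` variable and its construction from HinGE / Hinglish-TOP are assumptions):

```python
# Assuming `hinglish_pairs` is a datasets.Dataset with "English", "Hindi" and
# "Hinglish" columns built from the HinGE / Hinglish-TOP files loaded elsewhere.
hinglish_pairs = hinglish_pairs.map(
    format_to_norobots,
    batched=False,
    remove_columns=hinglish_pairs.column_names,  # keep only "category" and "hindi_messages"
)
```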
## Add the Hindi-specific system prompt
With a probability of 0.2, we add the system prompt `You are a native Hindi speaker who can converse at expert level in both Hindi and colloquial Hinglish`.
```python
# Also relies on the `bernoulli_sample` helper defined earlier.
system_prompt = "You are a native Hindi speaker who can converse at expert level in both Hindi and colloquial Hinglish."
def add_system_prompt(sample):
p = 0.2
add_system_message = bernoulli_sample(p)
first_message = sample["hindi_messages"][0]
if add_system_message:
if first_message["role"] == "system":
first_message["content"] = f"{first_message['content']} {system_prompt}"
else:
system_message = {'content': system_prompt, 'role': 'system'}
sample["hindi_messages"].insert(0, system_message)
return {"hindi_messages": sample["hindi_messages"]}
```
Example output:
```bash
{'category': 'Generation',
'hindi_messages': [{'content': 'You are a native Hindi speaker who can converse at expert level in both Hindi and colloquial Hinglish.',
'role': 'system'},
{'content': 'Help write a letter of 100 -200 words to my future self for Kyra, reflecting on her goals and aspirations. Frist write in English and then translate to Hindi',
'role': 'user'},
{'content': "Dear Future Self,\n\nI hope you're happy and proud of what you've achieved. As I write this, I'm excited to think about our goals and how far you've come. One goal was to be a machine learning engineer. I hope you've worked hard and become skilled in this field. Keep learning and innovating. Traveling was important to us. I hope you've seen different places and enjoyed the beauty of our world. Remember the memories and lessons. Starting a family mattered to us. If you have kids, treasure every moment. Be patient, loving, and grateful for your family.\n\nTake care of yourself. Rest, reflect, and cherish the time you spend with loved ones. Remember your dreams and celebrate what you've achieved. Your determination brought you here. I'm excited to see the person you've become, the impact you've made, and the love and joy in your life. Embrace opportunities and keep dreaming big.\n\nWith love,\nKyra\nTranslation of the above into Hindi:\nप्रिय भविष्य स्वयं, मुझे आशा है कि आपने जो हासिल किया है उससे आप खुश और गर्वित हैं। जब मैं यह लिख रहा हूँ, तो मैं अपने लक्ष्यों के बारे में सोचने के लिए उत्साहित हूं और सोचता हूँ कि आप कितनी दूर आ गए हैं। एक लक्ष्य मशीन लर्निंग इंजीनियर बनना था। मुझे आशा है कि आपने कड़ी मेहनत की होगी और इस क्षेत्र में कुशल बनेंगे। सीखते रहें और नवाचार करते रहें। हमारे लिए यात्रा करना महत्वपूर्ण था। मुझे आशा है कि आपने विभिन्न स्थानों को देखा होगा और हमारी दुनिया की सुंदरता का आनंद लिया होगा। यादों और सबक को याद रखें। परिवार शुरू करना हमारे लिए मायने रखता था। अगर आपके बच्चे हैं, तो हर पल को संजो कर रखें। अपने परिवार के लिए धैर्य, प्यार और आभारी रहें। अपना ख्याल रखें। आराम करें, सोचें और अपने प्रियजनों के साथ बिताए समय को संजो कर रखें। अपने सपनों को याद रखें और जो आपने हासिल किया है उसका जश्न मनाएं। आपका दृढ़ संकल्प आपको यहाँ ले आया है। आप जिस व्यक्ति के रूप में उभरे हैं, आपने जो प्रभाव डाला है और आपके जीवन में प्यार और खुशी को देखकर मैं उत्साहित हूं। अवसरों को अपनाएँ और बड़े सपने देखते रहें। प्यार के साथ, कायरा।",
'role': 'assistant'}]}
``` | smangrul/hindi_instruct_v1 | [
"task_categories:text-generation",
"size_categories:10K<n<100K",
"language:en",
"language:hi",
"license:mit",
"region:us"
] | 2023-12-21T15:13:33+00:00 | {"language": ["en", "hi"], "license": "mit", "size_categories": ["10K<n<100K"], "task_categories": ["text-generation"], "dataset_info": {"features": [{"name": "category", "dtype": "string"}, {"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 32114704.0, "num_examples": 20215}, {"name": "test", "num_bytes": 3408295.0, "num_examples": 7788}], "download_size": 14542561, "dataset_size": 35522999.0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-23T17:21:36+00:00 | [] | [
"en",
"hi"
] | TAGS
#task_categories-text-generation #size_categories-10K<n<100K #language-English #language-Hindi #license-mit #region-us
|
# Hindi Instruct V1 Dataset
This dataset is curated by Sourab Mangrulkar. It was developed on top of the 'HuggingFaceH4/no_robots' dataset. 
First, the dataset was translated using the 'ai4bharat/indictrans2-en-indic-1B' SoTA translation model developed by AI4Bharat. 
Here, it is important to note that the sequence length limit is 256 for input and output sequences. 
Hence, I split the text into individual sentences on full stops, create minibatches for translation, and then stitch the translations back together.
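A minimal sketch of that split-translate-stitch step (the `translate_batch` callable is a placeholder for the IndicTrans2 en->hi generation call; the batch size and exact splitting rules here are assumptions):
```python
def split_translate_stitch(text, translate_batch, batch_size=8):
    """Split on full stops, translate sentence minibatches, stitch back together."""
    sentences = [s.strip() + "." for s in text.split(".") if s.strip()]
    translated = []
    for i in range(0, len(sentences), batch_size):
        batch = sentences[i : i + batch_size]
        # `translate_batch` stands in for the IndicTrans2 en->hi generation call,
        # keeping every input under the 256-token limit.
        translated.extend(translate_batch(batch))
    return " ".join(translated)
```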
Next, inspired by the demo given by the Openhathi team here, I did the following:
## Remove Coding samples
After manually going through the coding samples and trying things like keeping user messages in Hindi while leaving the assistant messages with code snippets in English,
I found that it would still not work, because users often give code snippets and prompt the assistant to help with debugging. 
As this would be noise for Hindi instruction tuning, I simply removed all the coding examples.
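A sketch of that filtering step (assuming the source split still carries the no_robots `category` column, as in the example records shown above):
```python
from datasets import load_dataset

# Drop every sample whose category is "Coding" before Hindi instruction tuning.
dataset = load_dataset("HuggingFaceH4/no_robots")
dataset = dataset.filter(lambda sample: sample["category"] != "Coding")
```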
## Add English and Hinglish instructions to tell the model to reply in Hindi for English user messages and add the CoT prompt 'Frist write in English and then translate to Hindi'
With a probability of 0.2, we add one of the prompts 'Hindi mein jawab dena.'/'Reply in Hindi.'/'Summarize in Hindi' to the end of English user messages whose assistant replies are in Hindi.
This helps when a user writes something in English and prompts the model to reply in Hindi.
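A sketch of that suffixing step (the prompt strings are taken from the description above; the random gate and the `messages` column name are assumptions):
```python
import random

hindi_reply_prompts = ["Hindi mein jawab dena.", "Reply in Hindi.", "Summarize in Hindi"]

def maybe_ask_for_hindi_reply(sample):
    # With probability 0.2, explicitly ask for a Hindi reply at the end of the
    # last user message (the "messages" column name is an assumption here).
    if random.random() < 0.2:
        user_turns = [m for m in sample["messages"] if m["role"] == "user"]
        if user_turns:
            user_turns[-1]["content"] += f" {random.choice(hindi_reply_prompts)}"
    return sample
```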
For Chain-of-Thought ability, the OpenHathi team used the prompt "Frist write in English and then translate to Hindi", which leads to better responses.
To enable this, we suffix the prompt 'Frist write in English and then translate to Hindi'/'Pehle English mein likhna, phir Hindi mein translate karna.' to user messages in English and
process the assistant message using the template '{english_reply}\nTranslation of the above into Hindi:\n{translated_hindi_reply}'.
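A sketch of how a CoT turn can be built with that template (variable and function names are placeholders; the prompt strings are kept verbatim from the dataset, including the original 'Frist' spelling):
```python
import random

cot_suffixes = [
    # Prompt strings kept verbatim from the dataset (including the original "Frist" spelling).
    "Frist write in English and then translate to Hindi",
    "Pehle English mein likhna, phir Hindi mein translate karna.",
]

def to_cot_turn(user_msg, english_reply, translated_hindi_reply):
    user_msg = f"{user_msg} {random.choice(cot_suffixes)}"
    assistant_msg = (
        f"{english_reply}\nTranslation of the above into Hindi:\n{translated_hindi_reply}"
    )
    return user_msg, assistant_msg
```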
Example outputs:
## Hinglish (Code-Mixing and Transliteration) for colloquial usage
We use the below prompts with the following datasets 'HinGE' and 'URL for adding samples that do different combinations of translations between English, Hindi and Hinglish.
Example output:
## Add the Hindi-specific system prompt 
With a probability of 0.2, we add the system prompt 'You are a native Hindi speaker who can converse at expert level in both Hindi and colloquial Hinglish'.
Example output:
| [
"# Hindi Instruct V1 Dataset\n\nThis dataset is curated by Sourab Mangrulkar. It was developed on top of 'HuggingFaceH4/no_robots' dataset. \nFirst, the dataset was translated using 'ai4bharat/indictrans2-en-indic-1B' SoTA translation model developed by AI4Bharat. \nHere, it is important to note that the sequence length limit is 256 for input and output sequences. \nHence, I split the individual sentences on full stop and create minibatches for translation and then stitch them back properly.\n\n\n\nNext, inspired by the demo given by the Openhathi team here, I did the following:",
"## Remove Coding samples\n\nAfter manually going through the coding samples and trying things like having user messages in Hindi while the assistant messages with code snippets in English,\nI found that it would still not work because users often give code snippets and prompt the assistant to help with debugging. \nAs this would be noise in Hindi instruction tuning, I simply removed all the coding examples.",
"## Add English and Hinglish instructions to tell the model to reply in Hindi for Engligh user messages and add the CoT prompt 'Frist write in English and then translate to Hindi'\n\nWith probability of 0.2, add the prompts 'Hindi mein jawab dena.'/'Reply in Hindi.'/'Summarize in Hindi' to the end of user messages in English with the assistant replies in Hindi.\nThis would help when user writes something in English and prompt the model to reply in Hindi.\n\nFor Chain of Thoughts ability, the OpenHathi team used the prompt \"Frist write in English and then translate to Hindi\" which leads to better responses.\nTo enable this, we suffix the prompt 'Frist write in English and then translate to Hindi'/'Pehle English mein likhna, phir Hindi mein translate karna.' to user messages in English and \nprocessing the assistant message in the template '{english_reply}}\\nTranslation of the above into Hindi:\\n{translated_hindi_reply}'.\n\n\n\nExample outputs:",
"## Hinglish (Code-Mixing and Transliteration) for colloquial usage\n\nWe use the below prompts with the following datasets 'HinGE' and 'URL for adding samples that does different combinations of translations between English, Hindi and Hinglish.\n\n\n\nExample output:",
"## Add the Hindi specific System prompt \nWith 0.2 probability, we add the system prompt 'You are a native Hindi speaker who can converse at expert level in both Hindi and colloquial Hinglish'.\n\n\n\nExample output:"
] | [
"TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #language-Hindi #license-mit #region-us \n",
"# Hindi Instruct V1 Dataset\n\nThis dataset is curated by Sourab Mangrulkar. It was developed on top of 'HuggingFaceH4/no_robots' dataset. \nFirst, the dataset was translated using 'ai4bharat/indictrans2-en-indic-1B' SoTA translation model developed by AI4Bharat. \nHere, it is important to note that the sequence length limit is 256 for input and output sequences. \nHence, I split the individual sentences on full stop and create minibatches for translation and then stitch them back properly.\n\n\n\nNext, inspired by the demo given by the Openhathi team here, I did the following:",
"## Remove Coding samples\n\nAfter manually going through the coding samples and trying things like having user messages in Hindi while the assistant messages with code snippets in English,\nI found that it would still not work because users often give code snippets and prompt the assistant to help with debugging. \nAs this would be noise in Hindi instruction tuning, I simply removed all the coding examples.",
"## Add English and Hinglish instructions to tell the model to reply in Hindi for Engligh user messages and add the CoT prompt 'Frist write in English and then translate to Hindi'\n\nWith probability of 0.2, add the prompts 'Hindi mein jawab dena.'/'Reply in Hindi.'/'Summarize in Hindi' to the end of user messages in English with the assistant replies in Hindi.\nThis would help when user writes something in English and prompt the model to reply in Hindi.\n\nFor Chain of Thoughts ability, the OpenHathi team used the prompt \"Frist write in English and then translate to Hindi\" which leads to better responses.\nTo enable this, we suffix the prompt 'Frist write in English and then translate to Hindi'/'Pehle English mein likhna, phir Hindi mein translate karna.' to user messages in English and \nprocessing the assistant message in the template '{english_reply}}\\nTranslation of the above into Hindi:\\n{translated_hindi_reply}'.\n\n\n\nExample outputs:",
"## Hinglish (Code-Mixing and Transliteration) for colloquial usage\n\nWe use the below prompts with the following datasets 'HinGE' and 'URL for adding samples that does different combinations of translations between English, Hindi and Hinglish.\n\n\n\nExample output:",
"## Add the Hindi specific System prompt \nWith 0.2 probability, we add the system prompt 'You are a native Hindi speaker who can converse at expert level in both Hindi and colloquial Hinglish'.\n\n\n\nExample output:"
] | [
42,
153,
89,
239,
64,
47
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-10K<n<100K #language-English #language-Hindi #license-mit #region-us \n# Hindi Instruct V1 Dataset\n\nThis dataset is curated by Sourab Mangrulkar. It was developed on top of 'HuggingFaceH4/no_robots' dataset. \nFirst, the dataset was translated using 'ai4bharat/indictrans2-en-indic-1B' SoTA translation model developed by AI4Bharat. \nHere, it is important to note that the sequence length limit is 256 for input and output sequences. \nHence, I split the individual sentences on full stop and create minibatches for translation and then stitch them back properly.\n\n\n\nNext, inspired by the demo given by the Openhathi team here, I did the following:## Remove Coding samples\n\nAfter manually going through the coding samples and trying things like having user messages in Hindi while the assistant messages with code snippets in English,\nI found that it would still not work because users often give code snippets and prompt the assistant to help with debugging. \nAs this would be noise in Hindi instruction tuning, I simply removed all the coding examples."
] |
b68c912ca92d3c938592fe871a35dd5db176e702 |
# Dataset full name: Small Lynx Dataset
**Number of images**: 151 photos of lynxes
**Description**: The dataset contains a set of 151 images of lynxes of various sizes and poses. The images capture lynxes both in the wild and in captivity. Image quality varies depending on the source. The photographs show different angles of lynxes, their colors and features.
**Data sources**: The dataset was collected from open sources, including images from various online resources, as well as photographs provided by users.
**Purpose**: This dataset is intended for training neural networks and computer vision algorithms for lynx classification and recognition tasks.
**Note**: Photos can contain different poses, lighting, and backgrounds, making this a diverse dataset for model training. | Barishni-blinchik/Floppa-dataset-small-v2 | [
"license:apache-2.0",
"region:us"
] | 2023-12-21T16:13:24+00:00 | {"license": "apache-2.0"} | 2024-01-15T10:00:50+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
|
# Dataset full name: Small Lynx Dataset
Number of images: 151 photos of lynxes
Description: The dataset contains a set of 151 images of lynxes of various sizes and poses. The images capture lynxes both in the wild and in captivity. Image quality varies depending on the source. The photographs show different angles of lynxes, their colors and features.
Data sources: The dataset was collected from open sources, including images from various online resources, as well as photographs provided by users.
Purpose: This dataset is intended for training neural networks and computer vision algorithms for lynx classification and recognition tasks.
Note: Photos can contain different poses, lighting, and backgrounds, making this a diverse dataset for model training. | [
"# Dataset full name: Small Lynx Dataset\nNumber of images: 151 photos of lynxes\n\nDescription: The dataset contains a set of 151 images of lynxes of various sizes and poses. The images capture lynxes both in the wild and in captivity. Image quality varies depending on the source. The photographs show different angles of lynxes, their colors and features.\n\nData sources: The dataset was collected from open sources, including images from various online resources, as well as photographs provided by users.\n\nPurpose: This dataset is intended for training neural networks and computer vision algorithms for classification or recognition problems of lynxes.\n\nNote: Photos can contain different poses, lighting, and backgrounds, making this a diverse dataset for model training."
] | [
"TAGS\n#license-apache-2.0 #region-us \n",
"# Dataset full name: Small Lynx Dataset\nNumber of images: 151 photos of lynxes\n\nDescription: The dataset contains a set of 151 images of lynxes of various sizes and poses. The images capture lynxes both in the wild and in captivity. Image quality varies depending on the source. The photographs show different angles of lynxes, their colors and features.\n\nData sources: The dataset was collected from open sources, including images from various online resources, as well as photographs provided by users.\n\nPurpose: This dataset is intended for training neural networks and computer vision algorithms for classification or recognition problems of lynxes.\n\nNote: Photos can contain different poses, lighting, and backgrounds, making this a diverse dataset for model training."
] | [
14,
165
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n# Dataset full name: Small Lynx Dataset\nNumber of images: 151 photos of lynxes\n\nDescription: The dataset contains a set of 151 images of lynxes of various sizes and poses. The images capture lynxes both in the wild and in captivity. Image quality varies depending on the source. The photographs show different angles of lynxes, their colors and features.\n\nData sources: The dataset was collected from open sources, including images from various online resources, as well as photographs provided by users.\n\nPurpose: This dataset is intended for training neural networks and computer vision algorithms for classification or recognition problems of lynxes.\n\nNote: Photos can contain different poses, lighting, and backgrounds, making this a diverse dataset for model training."
] |
53c9ab0738725bfe9464953a0eede1b37201ecb1 | RAG-generated German code conversations from the ise-uiuc/Magicoder-OSS-Instruct-75K dataset, generated with Mixtral (4-bit quant)
RAG-generated conversations in German with a programming context. First prototype -> more data can be generated.
Still a WIP: Not yet sufficiently stable conversations | SebastianBodza/Coding_Conversations_de_ger | [
"region:us"
] | 2023-12-21T16:22:24+00:00 | {} | 2023-12-22T12:23:34+00:00 | [] | [] | TAGS
#region-us
| RAG-generated German code conversations from the ise-uiuc/Magicoder-OSS-Instruct-75K dataset, generated with Mixtral (4-bit quant)
RAG-generated conversations in German with a programming context. First prototype -> more data can be generated.
Still a WIP: Not yet sufficiently stable conversations | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
dbf4fbf03cbc3e610d3452b26eb8e94721eb373f |
# Dataset Card for Evaluation run of fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_fblgit__UNA-SOLAR-10.7B-Instruct-v1.0",
"harness_winogrande_5",
split="train")
```
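For the aggregated metrics, an analogous call against the "results" configuration and the "latest" split should work (treat the config and split names here as a sketch based on the description above rather than a guaranteed API):
```python
from datasets import load_dataset

results = load_dataset(
    "open-llm-leaderboard/details_fblgit__UNA-SOLAR-10.7B-Instruct-v1.0",
    "results",
    split="latest",
)
```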
## Latest results
These are the [latest results from run 2023-12-23T18:01:55.380326](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNA-SOLAR-10.7B-Instruct-v1.0/blob/main/results_2023-12-23T18-01-55.380326.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6645232088105464,
"acc_stderr": 0.03161209065220985,
"acc_norm": 0.6655297512119899,
"acc_norm_stderr": 0.03225370835754654,
"mc1": 0.5703794369645043,
"mc1_stderr": 0.01732923458040909,
"mc2": 0.7252131907680065,
"mc2_stderr": 0.01501805188232875
},
"harness|arc:challenge|25": {
"acc": 0.6825938566552902,
"acc_stderr": 0.013602239088038167,
"acc_norm": 0.7073378839590444,
"acc_norm_stderr": 0.013295916103619427
},
"harness|hellaswag|10": {
"acc": 0.7103166699860586,
"acc_stderr": 0.004526883021027635,
"acc_norm": 0.8831905994821748,
"acc_norm_stderr": 0.003205366051421356
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.41,
"acc_stderr": 0.049431107042371025,
"acc_norm": 0.41,
"acc_norm_stderr": 0.049431107042371025
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6074074074074074,
"acc_stderr": 0.04218506215368879,
"acc_norm": 0.6074074074074074,
"acc_norm_stderr": 0.04218506215368879
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.743421052631579,
"acc_stderr": 0.0355418036802569,
"acc_norm": 0.743421052631579,
"acc_norm_stderr": 0.0355418036802569
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.75,
"acc_stderr": 0.04351941398892446,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04351941398892446
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.02890159361241178,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.02890159361241178
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7777777777777778,
"acc_stderr": 0.03476590104304134,
"acc_norm": 0.7777777777777778,
"acc_norm_stderr": 0.03476590104304134
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6589595375722543,
"acc_stderr": 0.03614665424180826,
"acc_norm": 0.6589595375722543,
"acc_norm_stderr": 0.03614665424180826
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.38235294117647056,
"acc_stderr": 0.04835503696107223,
"acc_norm": 0.38235294117647056,
"acc_norm_stderr": 0.04835503696107223
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.74,
"acc_stderr": 0.04408440022768077,
"acc_norm": 0.74,
"acc_norm_stderr": 0.04408440022768077
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.625531914893617,
"acc_stderr": 0.03163910665367291,
"acc_norm": 0.625531914893617,
"acc_norm_stderr": 0.03163910665367291
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.6344827586206897,
"acc_stderr": 0.040131241954243856,
"acc_norm": 0.6344827586206897,
"acc_norm_stderr": 0.040131241954243856
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4947089947089947,
"acc_stderr": 0.02574986828855657,
"acc_norm": 0.4947089947089947,
"acc_norm_stderr": 0.02574986828855657
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.0442626668137991,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.0442626668137991
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621504,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621504
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.8096774193548387,
"acc_stderr": 0.022331707611823074,
"acc_norm": 0.8096774193548387,
"acc_norm_stderr": 0.022331707611823074
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5172413793103449,
"acc_stderr": 0.03515895551165698,
"acc_norm": 0.5172413793103449,
"acc_norm_stderr": 0.03515895551165698
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.8,
"acc_stderr": 0.031234752377721175,
"acc_norm": 0.8,
"acc_norm_stderr": 0.031234752377721175
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.8686868686868687,
"acc_stderr": 0.024063156416822516,
"acc_norm": 0.8686868686868687,
"acc_norm_stderr": 0.024063156416822516
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.9015544041450777,
"acc_stderr": 0.021500249576033467,
"acc_norm": 0.9015544041450777,
"acc_norm_stderr": 0.021500249576033467
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.658974358974359,
"acc_stderr": 0.02403548967633506,
"acc_norm": 0.658974358974359,
"acc_norm_stderr": 0.02403548967633506
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.37407407407407406,
"acc_stderr": 0.029502861128955286,
"acc_norm": 0.37407407407407406,
"acc_norm_stderr": 0.029502861128955286
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.7226890756302521,
"acc_stderr": 0.029079374539480007,
"acc_norm": 0.7226890756302521,
"acc_norm_stderr": 0.029079374539480007
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.36423841059602646,
"acc_stderr": 0.03929111781242741,
"acc_norm": 0.36423841059602646,
"acc_norm_stderr": 0.03929111781242741
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8440366972477065,
"acc_stderr": 0.015555802713590177,
"acc_norm": 0.8440366972477065,
"acc_norm_stderr": 0.015555802713590177
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5694444444444444,
"acc_stderr": 0.03376922151252335,
"acc_norm": 0.5694444444444444,
"acc_norm_stderr": 0.03376922151252335
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8529411764705882,
"acc_stderr": 0.02485747808025046,
"acc_norm": 0.8529411764705882,
"acc_norm_stderr": 0.02485747808025046
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8438818565400844,
"acc_stderr": 0.023627159460318667,
"acc_norm": 0.8438818565400844,
"acc_norm_stderr": 0.023627159460318667
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8148148148148148,
"acc_stderr": 0.03755265865037182,
"acc_norm": 0.8148148148148148,
"acc_norm_stderr": 0.03755265865037182
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.754601226993865,
"acc_stderr": 0.03380939813943354,
"acc_norm": 0.754601226993865,
"acc_norm_stderr": 0.03380939813943354
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.04726835553719099,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.04726835553719099
},
"harness|hendrycksTest-management|5": {
"acc": 0.8543689320388349,
"acc_stderr": 0.03492606476623791,
"acc_norm": 0.8543689320388349,
"acc_norm_stderr": 0.03492606476623791
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8632478632478633,
"acc_stderr": 0.022509033937077812,
"acc_norm": 0.8632478632478633,
"acc_norm_stderr": 0.022509033937077812
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8020434227330779,
"acc_stderr": 0.01424887354921757,
"acc_norm": 0.8020434227330779,
"acc_norm_stderr": 0.01424887354921757
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7543352601156069,
"acc_stderr": 0.023176298203992005,
"acc_norm": 0.7543352601156069,
"acc_norm_stderr": 0.023176298203992005
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.38994413407821227,
"acc_stderr": 0.01631237662921307,
"acc_norm": 0.38994413407821227,
"acc_norm_stderr": 0.01631237662921307
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7679738562091504,
"acc_stderr": 0.024170840879340863,
"acc_norm": 0.7679738562091504,
"acc_norm_stderr": 0.024170840879340863
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.025403832978179615,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.025403832978179615
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7870370370370371,
"acc_stderr": 0.022779719088733396,
"acc_norm": 0.7870370370370371,
"acc_norm_stderr": 0.022779719088733396
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.02982074719142248,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.02982074719142248
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.49282920469361147,
"acc_stderr": 0.012768922739553304,
"acc_norm": 0.49282920469361147,
"acc_norm_stderr": 0.012768922739553304
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.7426470588235294,
"acc_stderr": 0.02655651947004151,
"acc_norm": 0.7426470588235294,
"acc_norm_stderr": 0.02655651947004151
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.684640522875817,
"acc_stderr": 0.018798086284886883,
"acc_norm": 0.684640522875817,
"acc_norm_stderr": 0.018798086284886883
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6909090909090909,
"acc_stderr": 0.044262946482000985,
"acc_norm": 0.6909090909090909,
"acc_norm_stderr": 0.044262946482000985
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7306122448979592,
"acc_stderr": 0.02840125202902294,
"acc_norm": 0.7306122448979592,
"acc_norm_stderr": 0.02840125202902294
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8308457711442786,
"acc_stderr": 0.02650859065623327,
"acc_norm": 0.8308457711442786,
"acc_norm_stderr": 0.02650859065623327
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.9,
"acc_stderr": 0.030151134457776334,
"acc_norm": 0.9,
"acc_norm_stderr": 0.030151134457776334
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5903614457831325,
"acc_stderr": 0.038284011150790206,
"acc_norm": 0.5903614457831325,
"acc_norm_stderr": 0.038284011150790206
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.783625730994152,
"acc_stderr": 0.03158149539338733,
"acc_norm": 0.783625730994152,
"acc_norm_stderr": 0.03158149539338733
},
"harness|truthfulqa:mc|0": {
"mc1": 0.5703794369645043,
"mc1_stderr": 0.01732923458040909,
"mc2": 0.7252131907680065,
"mc2_stderr": 0.01501805188232875
},
"harness|winogrande|5": {
"acc": 0.8334648776637726,
"acc_stderr": 0.010470796496781093
},
"harness|gsm8k|5": {
"acc": 0.6338134950720242,
"acc_stderr": 0.013270100238748831
}
}
```
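As a quick sanity check, the headline numbers can be pulled out of that JSON with a few lines (a sketch that assumes the top-level layout shown in the snippet above; the raw file in the repo may nest these keys differently):
```python
import json

with open("results_2023-12-23T18-01-55.380326.json") as f:
    results = json.load(f)

print("ARC acc_norm:      ", results["harness|arc:challenge|25"]["acc_norm"])
print("HellaSwag acc_norm:", results["harness|hellaswag|10"]["acc_norm"])
print("TruthfulQA mc2:    ", results["harness|truthfulqa:mc|0"]["mc2"])
print("Winogrande acc:    ", results["harness|winogrande|5"]["acc"])
print("GSM8K acc:         ", results["harness|gsm8k|5"]["acc"])

# Mean accuracy over the 57 MMLU (hendrycksTest) subtasks:
mmlu = [v["acc"] for k, v in results.items() if k.startswith("harness|hendrycksTest-")]
print("MMLU mean acc:     ", sum(mmlu) / len(mmlu))
```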
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_fblgit__UNA-SOLAR-10.7B-Instruct-v1.0 | [
"region:us"
] | 2023-12-21T16:29:56+00:00 | {"pretty_name": "Evaluation run of fblgit/UNA-SOLAR-10.7B-Instruct-v1.0", "dataset_summary": "Dataset automatically created during the evaluation run of model [fblgit/UNA-SOLAR-10.7B-Instruct-v1.0](https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_fblgit__UNA-SOLAR-10.7B-Instruct-v1.0\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-23T18:01:55.380326](https://huggingface.co/datasets/open-llm-leaderboard/details_fblgit__UNA-SOLAR-10.7B-Instruct-v1.0/blob/main/results_2023-12-23T18-01-55.380326.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6645232088105464,\n \"acc_stderr\": 0.03161209065220985,\n \"acc_norm\": 0.6655297512119899,\n \"acc_norm_stderr\": 0.03225370835754654,\n \"mc1\": 0.5703794369645043,\n \"mc1_stderr\": 0.01732923458040909,\n \"mc2\": 0.7252131907680065,\n \"mc2_stderr\": 0.01501805188232875\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6825938566552902,\n \"acc_stderr\": 0.013602239088038167,\n \"acc_norm\": 0.7073378839590444,\n \"acc_norm_stderr\": 0.013295916103619427\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.7103166699860586,\n \"acc_stderr\": 0.004526883021027635,\n \"acc_norm\": 0.8831905994821748,\n \"acc_norm_stderr\": 0.003205366051421356\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.41,\n \"acc_stderr\": 0.049431107042371025,\n \"acc_norm\": 0.41,\n \"acc_norm_stderr\": 0.049431107042371025\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6074074074074074,\n \"acc_stderr\": 0.04218506215368879,\n \"acc_norm\": 0.6074074074074074,\n \"acc_norm_stderr\": 0.04218506215368879\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.743421052631579,\n \"acc_stderr\": 0.0355418036802569,\n \"acc_norm\": 0.743421052631579,\n \"acc_norm_stderr\": 0.0355418036802569\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04351941398892446,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04351941398892446\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7777777777777778,\n \"acc_stderr\": 0.03476590104304134,\n \"acc_norm\": 0.7777777777777778,\n \"acc_norm_stderr\": 0.03476590104304134\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n 
\"acc\": 0.47,\n \"acc_stderr\": 0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6589595375722543,\n \"acc_stderr\": 0.03614665424180826,\n \"acc_norm\": 0.6589595375722543,\n \"acc_norm_stderr\": 0.03614665424180826\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.38235294117647056,\n \"acc_stderr\": 0.04835503696107223,\n \"acc_norm\": 0.38235294117647056,\n \"acc_norm_stderr\": 0.04835503696107223\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.04408440022768077,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.04408440022768077\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.625531914893617,\n \"acc_stderr\": 0.03163910665367291,\n \"acc_norm\": 0.625531914893617,\n \"acc_norm_stderr\": 0.03163910665367291\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.6344827586206897,\n \"acc_stderr\": 0.040131241954243856,\n \"acc_norm\": 0.6344827586206897,\n \"acc_norm_stderr\": 0.040131241954243856\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4947089947089947,\n \"acc_stderr\": 0.02574986828855657,\n \"acc_norm\": 0.4947089947089947,\n \"acc_norm_stderr\": 0.02574986828855657\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.0442626668137991,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.0442626668137991\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621504,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621504\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.8096774193548387,\n \"acc_stderr\": 0.022331707611823074,\n \"acc_norm\": 0.8096774193548387,\n \"acc_norm_stderr\": 0.022331707611823074\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5172413793103449,\n \"acc_stderr\": 0.03515895551165698,\n \"acc_norm\": 0.5172413793103449,\n \"acc_norm_stderr\": 0.03515895551165698\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.031234752377721175,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.031234752377721175\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.8686868686868687,\n \"acc_stderr\": 0.024063156416822516,\n \"acc_norm\": 0.8686868686868687,\n \"acc_norm_stderr\": 0.024063156416822516\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.9015544041450777,\n \"acc_stderr\": 0.021500249576033467,\n \"acc_norm\": 0.9015544041450777,\n \"acc_norm_stderr\": 0.021500249576033467\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 
0.658974358974359,\n \"acc_stderr\": 0.02403548967633506,\n \"acc_norm\": 0.658974358974359,\n \"acc_norm_stderr\": 0.02403548967633506\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.37407407407407406,\n \"acc_stderr\": 0.029502861128955286,\n \"acc_norm\": 0.37407407407407406,\n \"acc_norm_stderr\": 0.029502861128955286\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.7226890756302521,\n \"acc_stderr\": 0.029079374539480007,\n \"acc_norm\": 0.7226890756302521,\n \"acc_norm_stderr\": 0.029079374539480007\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.36423841059602646,\n \"acc_stderr\": 0.03929111781242741,\n \"acc_norm\": 0.36423841059602646,\n \"acc_norm_stderr\": 0.03929111781242741\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8440366972477065,\n \"acc_stderr\": 0.015555802713590177,\n \"acc_norm\": 0.8440366972477065,\n \"acc_norm_stderr\": 0.015555802713590177\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5694444444444444,\n \"acc_stderr\": 0.03376922151252335,\n \"acc_norm\": 0.5694444444444444,\n \"acc_norm_stderr\": 0.03376922151252335\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8529411764705882,\n \"acc_stderr\": 0.02485747808025046,\n \"acc_norm\": 0.8529411764705882,\n \"acc_norm_stderr\": 0.02485747808025046\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8438818565400844,\n \"acc_stderr\": 0.023627159460318667,\n \"acc_norm\": 0.8438818565400844,\n \"acc_norm_stderr\": 0.023627159460318667\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8148148148148148,\n \"acc_stderr\": 0.03755265865037182,\n \"acc_norm\": 0.8148148148148148,\n \"acc_norm_stderr\": 0.03755265865037182\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.754601226993865,\n \"acc_stderr\": 0.03380939813943354,\n \"acc_norm\": 0.754601226993865,\n \"acc_norm_stderr\": 0.03380939813943354\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.04726835553719099,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.04726835553719099\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.8543689320388349,\n \"acc_stderr\": 0.03492606476623791,\n \"acc_norm\": 0.8543689320388349,\n \"acc_norm_stderr\": 0.03492606476623791\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8632478632478633,\n \"acc_stderr\": 0.022509033937077812,\n \"acc_norm\": 0.8632478632478633,\n \"acc_norm_stderr\": 0.022509033937077812\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8020434227330779,\n \"acc_stderr\": 0.01424887354921757,\n \"acc_norm\": 
0.8020434227330779,\n \"acc_norm_stderr\": 0.01424887354921757\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7543352601156069,\n \"acc_stderr\": 0.023176298203992005,\n \"acc_norm\": 0.7543352601156069,\n \"acc_norm_stderr\": 0.023176298203992005\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.38994413407821227,\n \"acc_stderr\": 0.01631237662921307,\n \"acc_norm\": 0.38994413407821227,\n \"acc_norm_stderr\": 0.01631237662921307\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7679738562091504,\n \"acc_stderr\": 0.024170840879340863,\n \"acc_norm\": 0.7679738562091504,\n \"acc_norm_stderr\": 0.024170840879340863\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n \"acc_stderr\": 0.025403832978179615,\n \"acc_norm\": 0.7234726688102894,\n \"acc_norm_stderr\": 0.025403832978179615\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7870370370370371,\n \"acc_stderr\": 0.022779719088733396,\n \"acc_norm\": 0.7870370370370371,\n \"acc_norm_stderr\": 0.022779719088733396\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.02982074719142248,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.02982074719142248\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.49282920469361147,\n \"acc_stderr\": 0.012768922739553304,\n \"acc_norm\": 0.49282920469361147,\n \"acc_norm_stderr\": 0.012768922739553304\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.7426470588235294,\n \"acc_stderr\": 0.02655651947004151,\n \"acc_norm\": 0.7426470588235294,\n \"acc_norm_stderr\": 0.02655651947004151\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.684640522875817,\n \"acc_stderr\": 0.018798086284886883,\n \"acc_norm\": 0.684640522875817,\n \"acc_norm_stderr\": 0.018798086284886883\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6909090909090909,\n \"acc_stderr\": 0.044262946482000985,\n \"acc_norm\": 0.6909090909090909,\n \"acc_norm_stderr\": 0.044262946482000985\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7306122448979592,\n \"acc_stderr\": 0.02840125202902294,\n \"acc_norm\": 0.7306122448979592,\n \"acc_norm_stderr\": 0.02840125202902294\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8308457711442786,\n \"acc_stderr\": 0.02650859065623327,\n \"acc_norm\": 0.8308457711442786,\n \"acc_norm_stderr\": 0.02650859065623327\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.9,\n \"acc_stderr\": 0.030151134457776334,\n \"acc_norm\": 0.9,\n \"acc_norm_stderr\": 0.030151134457776334\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5903614457831325,\n \"acc_stderr\": 0.038284011150790206,\n \"acc_norm\": 0.5903614457831325,\n \"acc_norm_stderr\": 0.038284011150790206\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.783625730994152,\n \"acc_stderr\": 0.03158149539338733,\n \"acc_norm\": 0.783625730994152,\n \"acc_norm_stderr\": 0.03158149539338733\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.5703794369645043,\n \"mc1_stderr\": 0.01732923458040909,\n \"mc2\": 0.7252131907680065,\n \"mc2_stderr\": 0.01501805188232875\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8334648776637726,\n \"acc_stderr\": 0.010470796496781093\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.6338134950720242,\n \"acc_stderr\": 0.013270100238748831\n }\n}\n```", "repo_url": 
"https://huggingface.co/fblgit/UNA-SOLAR-10.7B-Instruct-v1.0", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|arc:challenge|25_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|arc:challenge|25_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|arc:challenge|25_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|gsm8k|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|gsm8k|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|gsm8k|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hellaswag|10_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hellaswag|10_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hellaswag|10_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-21T16-27-41.332399.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T16-27-41.332399.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-21T16-27-41.332399.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T06-35-48.209976.parquet", 
"**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-22T06-35-48.209976.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-23T18-01-55.380326.parquet", 
"**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-23T18-01-55.380326.parquet", 
"**/details_harness|hendrycksTest-world_religions|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T18-01-55.380326.parquet", 
"**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-23T18-01-55.380326.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-23T18-01-55.380326.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": 
"2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": 
["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", 
"path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": 
["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", 
"data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": 
["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": 
["**/details_harness|hendrycksTest-sociology|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["**/details_harness|winogrande|5_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": ["**/details_harness|winogrande|5_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["**/details_harness|winogrande|5_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-23T18-01-55.380326.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_21T16_27_41.332399", "path": ["results_2023-12-21T16-27-41.332399.parquet"]}, {"split": "2023_12_22T06_35_48.209976", "path": 
["results_2023-12-22T06-35-48.209976.parquet"]}, {"split": "2023_12_23T18_01_55.380326", "path": ["results_2023-12-23T18-01-55.380326.parquet"]}, {"split": "latest", "path": ["results_2023-12-23T18-01-55.380326.parquet"]}]}]} | 2023-12-23T18:04:18+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of fblgit/UNA-SOLAR-10.7B-Instruct-v1.0
Dataset automatically created during the evaluation run of model fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
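(The snippet below is a sketch rather than the card's original example: the repository path follows the leaderboard's usual `details_<org>__<model>` naming and is an assumption, while `harness_winogrande_5` is one of the configurations listed in this card's metadata.)

```python
from datasets import load_dataset

# Assumed repository path, following the Open LLM Leaderboard's usual naming scheme.
data = load_dataset(
    "open-llm-leaderboard/details_fblgit__UNA-SOLAR-10.7B-Instruct-v1.0",
    "harness_winogrande_5",  # one of the 63 configurations
    split="latest",          # or a specific timestamped split
)
```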
## Latest results
These are the latest results from run 2023-12-23T18:01:55.380326 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of fblgit/UNA-SOLAR-10.7B-Instruct-v1.0\n\n\n\nDataset automatically created during the evaluation run of model fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-23T18:01:55.380326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of fblgit/UNA-SOLAR-10.7B-Instruct-v1.0\n\n\n\nDataset automatically created during the evaluation run of model fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-23T18:01:55.380326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
197,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of fblgit/UNA-SOLAR-10.7B-Instruct-v1.0\n\n\n\nDataset automatically created during the evaluation run of model fblgit/UNA-SOLAR-10.7B-Instruct-v1.0 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 2 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-23T18:01:55.380326(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]"
] |
ca5c9290a5dafe6d17eba38b41414b2464adb375 | Updated object meshes from the bigbird dataset [https://rll.berkeley.edu/bigbird/] to have normals facing outward and to be (mostly) watertight. | ll4ma-lab/bigbird-meshes-fixed-normals | [
"region:us"
] | 2023-12-21T16:38:07+00:00 | {} | 2023-12-21T18:52:27+00:00 | [] | [] | TAGS
#region-us
| Updated object meshes from the bigbird dataset [URL to have normals facing outward and to be (mostly) watertight. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
687e5b0e0ad3a092aba249b323117b6d596f795a | # Dataset Card for "fashion_image_caption-100-v212"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | limenous/fashion_image_caption-100-v212 | [
"region:us"
] | 2023-12-21T17:12:32+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "text", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22820471.0, "num_examples": 100}], "download_size": 22820373, "dataset_size": 22820471.0}} | 2023-12-21T17:12:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fashion_image_caption-100-v212"
More Information needed | [
"# Dataset Card for \"fashion_image_caption-100-v212\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fashion_image_caption-100-v212\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"fashion_image_caption-100-v212\"\n\nMore Information needed"
] |
b89c382aea2081305b5ddac05c9424693970dc1b | # Dataset Card for "apache_bugs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Partha117/apache_bugs | [
"region:us"
] | 2023-12-21T17:13:59+00:00 | {"dataset_info": {"features": [{"name": "id", "dtype": "int64"}, {"name": "bug_id", "dtype": "int64"}, {"name": "summary", "dtype": "string"}, {"name": "description", "dtype": "string"}, {"name": "report_time", "dtype": "string"}, {"name": "report_timestamp", "dtype": "int64"}, {"name": "status", "dtype": "string"}, {"name": "commit", "dtype": "string"}, {"name": "commit_timestamp", "dtype": "int64"}, {"name": "files", "dtype": "string"}, {"name": "project_name", "dtype": "string"}, {"name": "github_url", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 22350916.0476986, "num_examples": 16947}, {"name": "test", "num_bytes": 7649454.952301403, "num_examples": 5800}], "download_size": 10186111, "dataset_size": 30000371.0}} | 2023-12-21T17:14:14+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "apache_bugs"
More Information needed | [
"# Dataset Card for \"apache_bugs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"apache_bugs\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"apache_bugs\"\n\nMore Information needed"
] |
8a68e83da306ea43d1a196e7974acbfcccba5c1a |
# Weight Systems Defining Five-Dimensional IP Lattice Polytopes
This dataset contains all weight systems defining five-dimensional reflexive and
non-reflexive IP lattice polytopes, instrumental in the study of Calabi-Yau fourfolds in
mathematics and theoretical physics. The data was compiled by Harald Skarke and Friedrich
Schöller in [arXiv:1808.02422](https://arxiv.org/abs/1808.02422). More information is
available at the [Calabi-Yau data website](http://hep.itp.tuwien.ac.at/~kreuzer/CY/). The
dataset can be explored using the [search
frontend](http://rgc.itp.tuwien.ac.at/fourfolds/). See below for a short mathematical
exposition on the construction of polytopes.
Please cite the paper when referencing this dataset:
```
@article{Scholler:2018apc,
author = {Schöller, Friedrich and Skarke, Harald},
title = "{All Weight Systems for Calabi-Yau Fourfolds from Reflexive Polyhedra}",
eprint = "1808.02422",
archivePrefix = "arXiv",
primaryClass = "hep-th",
doi = "10.1007/s00220-019-03331-9",
journal = "Commun. Math. Phys.",
volume = "372",
number = "2",
pages = "657--678",
year = "2019"
}
```
## Dataset Details
The dataset consists of two subsets: weight systems defining reflexive (and therefore IP)
polytopes and weight systems defining non-reflexive IP polytopes. Each subset is split
into 4000 files in Parquet format. Rows within each file are sorted lexicographically by
weights. There are 185,269,499,015 weight systems defining reflexive polytopes and
137,114,261,915 defining non-reflexive polytopes, making a total of 322,383,760,930 IP
weight systems.
Each row in the dataset represents a polytope and contains the six weights defining it,
along with the vertex count, facet count, and lattice point count. The reflexive dataset
also includes the Hodge numbers \\( h^{1,1} \\), \\( h^{1,2} \\), and \\( h^{1,3} \\) of
the corresponding Calabi-Yau manifold, and the lattice point count of the dual polytope.
For any Calabi-Yau fourfold, the Euler characteristic \\( \chi \\) and the Hodge number
\\( h^{2,2} \\) can be derived as follows:
$$ \chi = 48 + 6 (h^{1,1} − h^{1,2} + h^{1,3}) $$
$$ h^{2,2} = 44 + 4 h^{1,1} − 2 h^{1,2} + 4 h^{1,3} $$
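
These two relations are easy to evaluate directly from the three stored Hodge numbers; the short helper below is only an illustration of the formulas above (the function name and the example values are ours, not part of the dataset):

```python
def derived_invariants(h11, h12, h13):
    """Euler characteristic and h^{2,2} derived from h^{1,1}, h^{1,2}, h^{1,3}."""
    chi = 48 + 6 * (h11 - h12 + h13)
    h22 = 44 + 4 * h11 - 2 * h12 + 4 * h13
    return chi, h22

# illustrative input values, not taken from the dataset
print(derived_invariants(h11=1, h12=0, h13=5))  # (84, 68)
```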
This dataset is licensed under the
[CC BY-SA 4.0 license](http://creativecommons.org/licenses/by-sa/4.0/).
### Data Fields
- `weight0` to `weight5`: Weights of the weight system defining the polytope.
- `vertex_count`: Vertex count of the polytope.
- `facet_count`: Facet count of the polytope.
- `point_count`: Lattice point count of the polytope.
- `dual_point_count`: Lattice point count of the dual polytope (only for reflexive
polytopes).
- `h11`: Hodge number \\( h^{1,1} \\) (only for reflexive polytopes).
- `h12`: Hodge number \\( h^{1,2} \\) (only for reflexive polytopes).
- `h13`: Hodge number \\( h^{1,3} \\) (only for reflexive polytopes).
## Usage
The dataset can be used without downloading it entirely, thanks to the streaming
capability of the `datasets` library. The following Python code snippet demonstrates how
to stream the dataset and print the first five rows:
```python
from datasets import load_dataset
dataset = load_dataset("calabi-yau-data/ws-5d", name="reflexive", split="full", streaming=True)
for row in dataset.take(5):
print(row)
```
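
Streaming also combines with the usual `datasets` operations, so rows can be filtered on the fly. The cut on the largest weight below is an arbitrary choice made purely for illustration:

```python
from datasets import load_dataset

dataset = load_dataset("calabi-yau-data/ws-5d", name="reflexive", split="full", streaming=True)

# keep only weight systems whose largest weight does not exceed 10 (arbitrary cut)
small_weights = dataset.filter(lambda row: max(row[f"weight{i}"] for i in range(6)) <= 10)

for row in small_weights.take(5):
    print(row["h11"], row["h12"], row["h13"])
```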
When cloning the Git repository with Git Large File Storage (LFS), data files are stored
both in the Git LFS storage directory and in the working tree. To avoid occupying double
the disk space, use a filesystem that supports copy-on-write, and run the following
commands to clone the repository:
```bash
# Initialize Git LFS
git lfs install
# Clone the repository without downloading LFS files immediately
GIT_LFS_SKIP_SMUDGE=1 git clone https://huggingface.co/datasets/calabi-yau-data/ws-5d
# Change to the repository directory
cd ws-5d
# Test deduplication (optional)
git lfs dedup --test
# Download the LFS files
git lfs fetch
# Create working tree files as clones of the files in the Git LFS storage directory using
# copy-on-write functionality
git lfs dedup
```
## Construction of Polytopes
This is an introduction to the mathematics involved in the construction of polytopes
relevant to this dataset. For more details and precise definitions, consult the paper
[arXiv:1808.02422](https://arxiv.org/abs/1808.02422) and references therein.
### Polytopes
A polytope is the convex hull of a finite set of points in \\(n\\)-dimensional Euclidean
space, \\(\mathbb{R}^n\\). This means it is the smallest convex shape that contains all
these points. The minimal collection of points that defines a particular polytope is its set of
vertices. Familiar examples of polytopes include triangles and rectangles in two
dimensions, and cubes and octahedra in three dimensions.
A polytope is considered an *IP polytope* (interior point polytope) if the origin of
\\(\mathbb{R}^n\\) is in the interior of the polytope, not on its boundary or outside it.
For any IP polytope \\(\nabla\\), its dual polytope \\(\nabla^*\\) is defined as the set
of points \\(\mathbf{y}\\) satisfying
$$
\mathbf{x} \cdot \mathbf{y}
\ge -1 \quad \text{for all } \mathbf{x} \in \nabla \;.
$$
This relationship is symmetric: the dual of the dual of an IP polytope is the polytope
itself, i.e., \\( \nabla^{**} = \nabla \\).
### Weight Systems
Weight systems provide a means to describe simple polytopes known as *simplices*. A weight
system is a tuple of real numbers. The construction process is outlined as follows:
Consider an \\(n\\)-dimensional simplex in \\(\mathbb{R}^n\\), i.e., a polytope in
\\(\mathbb{R}^n\\) with vertex count \\(n + 1\\) and \\(n\\) of its edges extending in
linearly independent directions. It is possible to position \\(n\\) of its vertices at
arbitrary (linearly independent) locations through a linear transformation. The placement
of the remaining vertex is then determined. Its position is the defining property of the
simplex. To specify the position independently of the applied linear transformation, one
can use the following equation. If \\(\mathbf{v}_0, \mathbf{v}_1, \dots, \mathbf{v}_n\\)
are the vertices of the simplex, this relation fixes one vertex in terms of the other
\\(n\\):
$$ \sum_{i=0}^n q_i \mathbf{v}_i = 0 \;, $$
where \\(q_i\\) is the tuple of real numbers, the weight system.
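
As a concrete illustration (our own example, with the vertex placement chosen freely), take the two-dimensional weight system \\( (1, 2, 3) \\) that also appears in the table further below, fix \\(\mathbf{v}_1 = (1, 0)\\) and \\(\mathbf{v}_2 = (0, 1)\\), and solve the relation for the remaining vertex:

```python
from fractions import Fraction

q = [1, 2, 3]                    # weight system (q_0, q_1, q_2)
v1, v2 = (1, 0), (0, 1)          # two vertices placed by a linear transformation

# q_0 * v0 + q_1 * v1 + q_2 * v2 = 0   =>   v0 = -(q_1 * v1 + q_2 * v2) / q_0
v0 = tuple(Fraction(-(q[1] * a + q[2] * b), q[0]) for a, b in zip(v1, v2))
print(v0)  # (Fraction(-2, 1), Fraction(-3, 1)), i.e. v0 = (-2, -3)
```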
It is important to note that scaling all weights in a weight system by a common factor
results in an equivalent weight system that defines the same simplex.
A simplex is an IP simplex if and only if all weights in its weight system are strictly
positive.
For this dataset, the focus is on a specific construction of lattice polytopes described
in subsequent sections.
### Lattice Polytopes
A lattice polytope is a polytope with vertices at the points of a regular grid, or
lattice. Using linear transformations, any lattice polytope can be transformed so that its
vertices have integer coordinates; for this reason, lattice polytopes are also referred to
as integral polytopes.
The dual of a lattice with points \\(L\\) is the lattice consisting of all points
\\(\mathbf{y}\\) that satisfy
$$
\mathbf{x} \cdot \mathbf{y} \in \mathbb{Z} \quad \text{for all } \mathbf{x} \in L \;.
$$
*Reflexive polytopes* are a specific type of lattice polytope characterized by having a
dual that is also a lattice polytope, with vertices situated on the dual lattice. These
polytopes play a central role in the context of this dataset.
The weights of a lattice polytope are always rational. This characteristic enables the
rescaling of a weight system so that its weights become integers without any common
divisor. This rescaling has been performed in this dataset.
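
This normalization is straightforward with exact rational arithmetic; the sketch below (our own helper, not code shipped with the dataset) rescales a rational weight system to coprime integers:

```python
from fractions import Fraction
from functools import reduce
from math import gcd

def normalize(weights):
    """Rescale rational weights to coprime positive integers."""
    ws = [Fraction(w) for w in weights]
    # least common multiple of the denominators
    lcm = reduce(lambda a, b: a * b // gcd(a, b), (w.denominator for w in ws), 1)
    ints = [int(w * lcm) for w in ws]
    g = reduce(gcd, ints)
    return [i // g for i in ints]

print(normalize(["1/2", "1/3", "1/6"]))  # [3, 2, 1]
```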
The construction of the lattice polytopes from this dataset works as follows: We start
with the simplex \\(\nabla\\), arising from a weight system as previously described. Then,
we define the polytope \\(\Delta\\) as the convex hull of the intersection of
\\(\nabla^*\\) with the points of the dual lattice. In the context of this dataset, the
polytope \\(\Delta\\) is referred to as ‘the polytope’. Correspondingly,
\\(\Delta^{\!*}\\) is referred to as ‘the dual polytope’. The lattice of \\(\nabla\\) and
\\(\Delta^{\!*}\\) is taken to be the coarsest lattice possible, such that \\(\nabla\\) is
a lattice polytope, i.e., the lattice generated by the vertices of \\(\nabla\\). This
construction is exemplified in the following sections.
A weight system is considered an IP weight system if the corresponding \\(\Delta\\) is an
IP polytope; that is, the origin is within its interior. Since only IP polytopes have
corresponding dual polytopes, this condition is essential for the polytope \\(\Delta\\) to
be classified as reflexive.
### Two Dimensions
In two dimensions, all IP weight systems define reflexive polytopes and every vertex of
\\(\nabla^*\\) lies on the dual lattice, making \\(\Delta\\) and \\(\nabla^*\\) identical.
There are exactly three IP weight systems that define two-dimensional polytopes
(polygons). Each polytope is reflexive and has three vertices and three facets (edges):
| weight system | number of points of \\(\nabla\\) | number of points of \\(\nabla^*\\) |
|--------------:|---------------------------------:|-----------------------------------:|
| (1, 1, 1) | 4 | 10 |
| (1, 1, 2) | 5 | 9 |
| (1, 2, 3) | 7 | 7 |
The polytopes and their duals are depicted below. Lattice points are indicated by dots.
<img src="pictures/ws-2d.png" style="display: block; margin-left: auto; margin-right: auto; width:520px;">
### General Dimension
In higher dimensions, the situation becomes more complex. Not all IP polytopes are
reflexive, and generally, \\(\Delta \neq \nabla^*\\).
This example shows the construction of the three-dimensional polytope \\(\Delta\\) with
weight system (2, 3, 4, 5) and its dual \\(\Delta^{\!*}\\). Lattice points lying on the
polytopes are indicated by dots. \\(\Delta\\) has 7 vertices and 13 lattice points,
\\(\Delta^{\!*}\\) also has 7 vertices, but 16 lattice points.
<img src="pictures/ws-3d-2-3-4-5.png" style="display: block; margin-left: auto; margin-right: auto; width:450px;">
The counts of reflexive single-weight-system polytopes by dimension \\(n\\) are:
| \\(n\\) | reflexive single-weight-system polytopes |
|--------:|-----------------------------------------:|
| 2 | 3 |
| 3 | 95 |
| 4 | 184,026 |
| 5 | (this dataset) 185,269,499,015 |
Note that distinct weight systems may well lead to the same polytope (we have not checked
how often this occurs); in particular, it seems that polytopes with a small number of
lattice points are generated many times.
| calabi-yau-data/ws-5d | [
"size_categories:100B<n<1T",
"license:cc-by-sa-4.0",
"physics",
"math",
"arxiv:1808.02422",
"region:us"
] | 2023-12-21T17:16:10+00:00 | {"license": "cc-by-sa-4.0", "size_categories": ["100B<n<1T"], "pretty_name": "Weight Systems Defining Five-Dimensional IP Lattice Polytopes", "configs": [{"config_name": "non-reflexive", "data_files": [{"split": "full", "path": "non-reflexive/*.parquet"}]}, {"config_name": "reflexive", "data_files": [{"split": "full", "path": "reflexive/*.parquet"}]}], "tags": ["physics", "math"]} | 2024-01-18T12:48:46+00:00 | [
"1808.02422"
] | [] | TAGS
#size_categories-100B<n<1T #license-cc-by-sa-4.0 #physics #math #arxiv-1808.02422 #region-us
| Weight Systems Defining Five-Dimensional IP Lattice Polytopes
=============================================================
This dataset contains all weight systems defining five-dimensional reflexive and
non-reflexive IP lattice polytopes, instrumental in the study of Calabi-Yau fourfolds in
mathematics and theoretical physics. The data was compiled by Harald Skarke and Friedrich
Schöller in arXiv:1808.02422. More information is
available at the Calabi-Yau data website. The
dataset can be explored using the search
frontend. See below for a short mathematical
exposition on the construction of polytopes.
Please cite the paper when referencing this dataset:
Dataset Details
---------------
The dataset consists of two subsets: weight systems defining reflexive (and therefore IP)
polytopes and weight systems defining non-reflexive IP polytopes. Each subset is split
into 4000 files in Parquet format. Rows within each file are sorted lexicographically by
weights. There are 185,269,499,015 weight systems defining reflexive polytopes and
137,114,261,915 defining non-reflexive polytopes, making a total of 322,383,760,930 IP
weight systems.
Each row in the dataset represents a polytope and contains the six weights defining it,
along with the vertex count, facet count, and lattice point count. The reflexive dataset
also includes the Hodge numbers \( h^{1,1} \), \( h^{1,2} \), and \( h^{1,3} \) of
the corresponding Calabi-Yau manifold, and the lattice point count of the dual polytope.
For any Calabi-Yau fourfold, the Euler characteristic \( \chi \) and the Hodge number
\( h^{2,2} \) can be derived as follows:
$$ \chi = 48 + 6 (h^{1,1} − h^{1,2} + h^{1,3}) $$
$$ h^{2,2} = 44 + 4 h^{1,1} − 2 h^{1,2} + 4 h^{1,3} $$
This dataset is licensed under the
CC BY-SA 4.0 license.
### Data Fields
* 'weight0' to 'weight5': Weights of the weight system defining the polytope.
* 'vertex\_count': Vertex count of the polytope.
* 'facet\_count': Facet count of the polytope.
* 'point\_count': Lattice point count of the polytope.
* 'dual\_point\_count': Lattice point count of the dual polytope (only for reflexive
polytopes).
* 'h11': Hodge number \( h^{1,1} \) (only for reflexive polytopes).
* 'h12': Hodge number \( h^{1,2} \) (only for reflexive polytopes).
* 'h13': Hodge number \( h^{1,3} \) (only for reflexive polytopes).
Usage
-----
The dataset can be used without downloading it entirely, thanks to the streaming
capability of the 'datasets' library. The following Python code snippet demonstrates how
to stream the dataset and print the first five rows:
When cloning the Git repository with Git Large File Storage (LFS), data files are stored
both in the Git LFS storage directory and in the working tree. To avoid occupying double
the disk space, use a filesystem that supports copy-on-write, and run the following
commands to clone the repository:
Construction of Polytopes
-------------------------
This is an introduction to the mathematics involved in the construction of polytopes
relevant to this dataset. For more details and precise definitions, consult the paper
arXiv:1808.02422 and references therein.
### Polytopes
A polytope is the convex hull of a finite set of points in \(n\)-dimensional Euclidean
space, \(\mathbb{R}^n\). This means it is the smallest convex shape that contains all
these points. The minimal collection of points that define a particular polytope are its
vertices. Familiar examples of polytopes include triangles and rectangles in two
dimensions, and cubes and octahedra in three dimensions.
A polytope is considered an *IP polytope* (interior point polytope) if the origin of
\(\mathbb{R}^n\) is in the interior of the polytope, not on its boundary or outside it.
For any IP polytope \(\nabla\), its dual polytope \(\nabla^\*\) is defined as the set
of points \(\mathbf{y}\) satisfying
$$
\mathbf{x} \cdot \mathbf{y}
\ge -1 \quad \text{for all } \mathbf{x} \in \nabla ;.
$$
This relationship is symmetric: the dual of the dual of an IP polytope is the polytope
itself, i.e., \( \nabla^{} = \nabla \).
### Weight Systems
Weight systems provide a means to describe simple polytopes known as *simplices*. A weight
system is a tuple of real numbers. The construction process is outlined as follows:
Consider an \(n\)-dimensional simplex in \(\mathbb{R}^n\), i.e., a polytope in
\(\mathbb{R}^n\) with vertex count \(n + 1\) and \(n\) of its edges extending in
linearly independent directions. It is possible to position \(n\) of its vertices at
arbitrary (linearly independent) locations through a linear transformation. The placement
of the remaining vertex is then determined. Its position is the defining property of the
simplex. To specify the position independently of the applied linear transformation, one
can use the following equation. If \(\mathbf{v}\_0, \mathbf{v}\_1, \dots, \mathbf{v}\_n\)
are the vertices of the simplex, this relation fixes one vertex in terms of the other
\(n\):
$$ \sum\_{i=0}^n q\_i \mathbf{v}\_i = 0 ;, $$
where \(q\_i\) is the tuple of real numbers, the weight system.
It is important to note that scaling all weights in a weight system by a common factor
results in an equivalent weight system that defines the same simplex.
The condition that a simplex is an IP simplex is equivalent to the condition that all
weights in its weight system are bigger than zero.
For this dataset, the focus is on a specific construction of lattice polytopes described
in subsequent sections.
### Lattice Polytopes
A lattice polytope is a polytope with vertices at the points of a regular grid, or
lattice. Using linear transformations, any lattice polytope can be transformed so that its
vertices have integer coordinates, hence they are also referred to as integral
polytopes.
The dual of a lattice with points \(L\) is the lattice consisting of all points
\(\mathbf{y}\) that satisfy
$$
\mathbf{x} \cdot \mathbf{y} \in \mathbb{Z} \quad \text{for all } \mathbf{x} \in L ;.
$$
*Reflexive polytopes* are a specific type of lattice polytope characterized by having a
dual that is also a lattice polytope, with vertices situated on the dual lattice. These
polytopes play a central role in the context of this dataset.
The weights of a lattice polytope are always rational. This characteristic enables the
rescaling of a weight system so that its weights become integers without any common
divisor. This rescaling has been performed in this dataset.
The construction of the lattice polytopes from this dataset works as follows: We start
with the simplex \(\nabla\), arising from a weight system as previously described. Then,
we define the polytope \(\Delta\) as the convex hull of the intersection of
\(\nabla^*\) with the points of the dual lattice. In the context of this dataset, the
polytope \(\Delta\) is referred to as ‘the polytope’. Correspondingly,
\(\Delta^{!*}\) is referred to as ‘the dual polytope’. The lattice of \(\nabla\) and
\(\Delta^{!\*}\) is taken to be the coarsest lattice possible, such that \(\nabla\) is
a lattice polytope, i.e., the lattice generated by the vertices of \(\nabla\). This
construction is exemplified in the following sections.
A weight system is considered an IP weight system if the corresponding \(\Delta\) is an
IP polytope; that is, the origin is within its interior. Since only IP polytopes have
corresponding dual polytopes, this condition is essential for the polytope \(\Delta\) to
be classified as reflexive.
### Two Dimensions
In two dimensions, all IP weight systems define reflexive polytopes and every vertex of
\(\nabla^*\) lies on the dual lattice, making \(\Delta\) and \(\nabla^*\) identical.
There are exactly three IP weight systems that define two-dimensional polytopes
(polygons). Each polytope is reflexive and has three vertices and three facets (edges):
The polytopes and their duals are depicted below. Lattice points are indicated by dots.

### General Dimension
In higher dimensions, the situation becomes more complex. Not all IP polytopes are
reflexive, and generally, \(\Delta \neq \nabla^\*\).
This example shows the construction of the three-dimensional polytope \(\Delta\) with
weight system (2, 3, 4, 5) and its dual \(\Delta^{!*}\). Lattice points lying on the
polytopes are indicated by dots. \(\Delta\) has 7 vertices and 13 lattice points,
\(\Delta^{!*}\) also has 7 vertices, but 16 lattice points.

The counts of reflexive single-weight-system polytopes by dimension \(n\) are:
One should note that distinct weight systems may well lead to the same polytope (we have
not checked how often this occurs). In particular it seems that polytopes with a small
number of lattice points are generated many times.
| [
"### Data Fields\n\n\n* 'weight0' to 'weight5': Weights of the weight system defining the polytope.\n* 'vertex\\_count': Vertex count of the polytope.\n* 'facet\\_count': Facet count of the polytope.\n* 'point\\_count': Lattice point count of the polytope.\n* 'dual\\_point\\_count': Lattice point count of the dual polytope (only for reflexive\npolytopes).\n* 'h11': Hodge number \\( h^{1,1} \\) (only for reflexive polytopes).\n* 'h12': Hodge number \\( h^{1,2} \\) (only for reflexive polytopes).\n* 'h13': Hodge number \\( h^{1,3} \\) (only for reflexive polytopes).\n\n\nUsage\n-----\n\n\nThe dataset can be used without downloading it entirely, thanks to the streaming\ncapability of the 'datasets' library. The following Python code snippet demonstrates how\nto stream the dataset and print the first five rows:\n\n\nWhen cloning the Git repository with Git Large File Storage (LFS), data files are stored\nboth in the Git LFS storage directory and in the working tree. To avoid occupying double\nthe disk space, use a filesystem that supports copy-on-write, and run the following\ncommands to clone the repository:\n\n\nConstruction of Polytopes\n-------------------------\n\n\nThis is an introduction to the mathematics involved in the construction of polytopes\nrelevant to this dataset. For more details and precise definitions, consult the paper\narXiv:1808.02422 and references therein.",
"### Polytopes\n\n\nA polytope is the convex hull of a finite set of points in \\(n\\)-dimensional Euclidean\nspace, \\(\\mathbb{R}^n\\). This means it is the smallest convex shape that contains all\nthese points. The minimal collection of points that define a particular polytope are its\nvertices. Familiar examples of polytopes include triangles and rectangles in two\ndimensions, and cubes and octahedra in three dimensions.\n\n\nA polytope is considered an *IP polytope* (interior point polytope) if the origin of\n\\(\\mathbb{R}^n\\) is in the interior of the polytope, not on its boundary or outside it.\n\n\nFor any IP polytope \\(\\nabla\\), its dual polytope \\(\\nabla^\\*\\) is defined as the set\nof points \\(\\mathbf{y}\\) satisfying\n\n\n$$\n\\mathbf{x} \\cdot \\mathbf{y}\n\\ge -1 \\quad \\text{for all } \\mathbf{x} \\in \\nabla ;.\n$$\n\n\nThis relationship is symmetric: the dual of the dual of an IP polytope is the polytope\nitself, i.e., \\( \\nabla^{} = \\nabla \\).",
"### Weight Systems\n\n\nWeight systems provide a means to describe simple polytopes known as *simplices*. A weight\nsystem is a tuple of real numbers. The construction process is outlined as follows:\n\n\nConsider an \\(n\\)-dimensional simplex in \\(\\mathbb{R}^n\\), i.e., a polytope in\n\\(\\mathbb{R}^n\\) with vertex count \\(n + 1\\) and \\(n\\) of its edges extending in\nlinearly independent directions. It is possible to position \\(n\\) of its vertices at\narbitrary (linearly independent) locations through a linear transformation. The placement\nof the remaining vertex is then determined. Its position is the defining property of the\nsimplex. To specify the position independently of the applied linear transformation, one\ncan use the following equation. If \\(\\mathbf{v}\\_0, \\mathbf{v}\\_1, \\dots, \\mathbf{v}\\_n\\)\nare the vertices of the simplex, this relation fixes one vertex in terms of the other\n\\(n\\):\n\n\n$$ \\sum\\_{i=0}^n q\\_i \\mathbf{v}\\_i = 0 ;, $$\n\n\nwhere \\(q\\_i\\) is the tuple of real numbers, the weight system.\n\n\nIt is important to note that scaling all weights in a weight system by a common factor\nresults in an equivalent weight system that defines the same simplex.\n\n\nThe condition that a simplex is an IP simplex is equivalent to the condition that all\nweights in its weight system are bigger than zero.\n\n\nFor this dataset, the focus is on a specific construction of lattice polytopes described\nin subsequent sections.",
"### Lattice Polytopes\n\n\nA lattice polytope is a polytope with vertices at the points of a regular grid, or\nlattice. Using linear transformations, any lattice polytope can be transformed so that its\nvertices have integer coordinates, hence they are also referred to as integral\npolytopes.\n\n\nThe dual of a lattice with points \\(L\\) is the lattice consisting of all points\n\\(\\mathbf{y}\\) that satisfy\n\n\n$$\n\\mathbf{x} \\cdot \\mathbf{y} \\in \\mathbb{Z} \\quad \\text{for all } \\mathbf{x} \\in L ;.\n$$\n\n\n*Reflexive polytopes* are a specific type of lattice polytope characterized by having a\ndual that is also a lattice polytope, with vertices situated on the dual lattice. These\npolytopes play a central role in the context of this dataset.\n\n\nThe weights of a lattice polytope are always rational. This characteristic enables the\nrescaling of a weight system so that its weights become integers without any common\ndivisor. This rescaling has been performed in this dataset.\n\n\nThe construction of the lattice polytopes from this dataset works as follows: We start\nwith the simplex \\(\\nabla\\), arising from a weight system as previously described. Then,\nwe define the polytope \\(\\Delta\\) as the convex hull of the intersection of\n\\(\\nabla^*\\) with the points of the dual lattice. In the context of this dataset, the\npolytope \\(\\Delta\\) is referred to as ‘the polytope’. Correspondingly,\n\\(\\Delta^{!*}\\) is referred to as ‘the dual polytope’. The lattice of \\(\\nabla\\) and\n\\(\\Delta^{!\\*}\\) is taken to be the coarsest lattice possible, such that \\(\\nabla\\) is\na lattice polytope, i.e., the lattice generated by the vertices of \\(\\nabla\\). This\nconstruction is exemplified in the following sections.\n\n\nA weight system is considered an IP weight system if the corresponding \\(\\Delta\\) is an\nIP polytope; that is, the origin is within its interior. Since only IP polytopes have\ncorresponding dual polytopes, this condition is essential for the polytope \\(\\Delta\\) to\nbe classified as reflexive.",
"### Two Dimensions\n\n\nIn two dimensions, all IP weight systems define reflexive polytopes and every vertex of\n\\(\\nabla^*\\) lies on the dual lattice, making \\(\\Delta\\) and \\(\\nabla^*\\) identical.\nThere are exactly three IP weight systems that define two-dimensional polytopes\n(polygons). Each polytope is reflexive and has three vertices and three facets (edges):\n\n\n\nThe polytopes and their duals are depicted below. Lattice points are indicated by dots.\n",
"### General Dimension\n\n\nIn higher dimensions, the situation becomes more complex. Not all IP polytopes are\nreflexive, and generally, \\(\\Delta \\neq \\nabla^\\*\\).\n\n\nThis example shows the construction of the three-dimensional polytope \\(\\Delta\\) with\nweight system (2, 3, 4, 5) and its dual \\(\\Delta^{!*}\\). Lattice points lying on the\npolytopes are indicated by dots. \\(\\Delta\\) has 7 vertices and 13 lattice points,\n\\(\\Delta^{!*}\\) also has 7 vertices, but 16 lattice points.\n\n\n\nThe counts of reflexive single-weight-system polytopes by dimension \\(n\\) are:\n\n\n\nOne should note that distinct weight systems may well lead to the same polytope (we have\nnot checked how often this occurs). In particular it seems that polytopes with a small\nnumber of lattice points are generated many times."
] | [
"TAGS\n#size_categories-100B<n<1T #license-cc-by-sa-4.0 #physics #math #arxiv-1808.02422 #region-us \n",
"### Data Fields\n\n\n* 'weight0' to 'weight5': Weights of the weight system defining the polytope.\n* 'vertex\\_count': Vertex count of the polytope.\n* 'facet\\_count': Facet count of the polytope.\n* 'point\\_count': Lattice point count of the polytope.\n* 'dual\\_point\\_count': Lattice point count of the dual polytope (only for reflexive\npolytopes).\n* 'h11': Hodge number \\( h^{1,1} \\) (only for reflexive polytopes).\n* 'h12': Hodge number \\( h^{1,2} \\) (only for reflexive polytopes).\n* 'h13': Hodge number \\( h^{1,3} \\) (only for reflexive polytopes).\n\n\nUsage\n-----\n\n\nThe dataset can be used without downloading it entirely, thanks to the streaming\ncapability of the 'datasets' library. The following Python code snippet demonstrates how\nto stream the dataset and print the first five rows:\n\n\nWhen cloning the Git repository with Git Large File Storage (LFS), data files are stored\nboth in the Git LFS storage directory and in the working tree. To avoid occupying double\nthe disk space, use a filesystem that supports copy-on-write, and run the following\ncommands to clone the repository:\n\n\nConstruction of Polytopes\n-------------------------\n\n\nThis is an introduction to the mathematics involved in the construction of polytopes\nrelevant to this dataset. For more details and precise definitions, consult the paper\narXiv:1808.02422 and references therein.",
"### Polytopes\n\n\nA polytope is the convex hull of a finite set of points in \\(n\\)-dimensional Euclidean\nspace, \\(\\mathbb{R}^n\\). This means it is the smallest convex shape that contains all\nthese points. The minimal collection of points that define a particular polytope are its\nvertices. Familiar examples of polytopes include triangles and rectangles in two\ndimensions, and cubes and octahedra in three dimensions.\n\n\nA polytope is considered an *IP polytope* (interior point polytope) if the origin of\n\\(\\mathbb{R}^n\\) is in the interior of the polytope, not on its boundary or outside it.\n\n\nFor any IP polytope \\(\\nabla\\), its dual polytope \\(\\nabla^\\*\\) is defined as the set\nof points \\(\\mathbf{y}\\) satisfying\n\n\n$$\n\\mathbf{x} \\cdot \\mathbf{y}\n\\ge -1 \\quad \\text{for all } \\mathbf{x} \\in \\nabla ;.\n$$\n\n\nThis relationship is symmetric: the dual of the dual of an IP polytope is the polytope\nitself, i.e., \\( \\nabla^{} = \\nabla \\).",
"### Weight Systems\n\n\nWeight systems provide a means to describe simple polytopes known as *simplices*. A weight\nsystem is a tuple of real numbers. The construction process is outlined as follows:\n\n\nConsider an \\(n\\)-dimensional simplex in \\(\\mathbb{R}^n\\), i.e., a polytope in\n\\(\\mathbb{R}^n\\) with vertex count \\(n + 1\\) and \\(n\\) of its edges extending in\nlinearly independent directions. It is possible to position \\(n\\) of its vertices at\narbitrary (linearly independent) locations through a linear transformation. The placement\nof the remaining vertex is then determined. Its position is the defining property of the\nsimplex. To specify the position independently of the applied linear transformation, one\ncan use the following equation. If \\(\\mathbf{v}\\_0, \\mathbf{v}\\_1, \\dots, \\mathbf{v}\\_n\\)\nare the vertices of the simplex, this relation fixes one vertex in terms of the other\n\\(n\\):\n\n\n$$ \\sum\\_{i=0}^n q\\_i \\mathbf{v}\\_i = 0 ;, $$\n\n\nwhere \\(q\\_i\\) is the tuple of real numbers, the weight system.\n\n\nIt is important to note that scaling all weights in a weight system by a common factor\nresults in an equivalent weight system that defines the same simplex.\n\n\nThe condition that a simplex is an IP simplex is equivalent to the condition that all\nweights in its weight system are bigger than zero.\n\n\nFor this dataset, the focus is on a specific construction of lattice polytopes described\nin subsequent sections.",
"### Lattice Polytopes\n\n\nA lattice polytope is a polytope with vertices at the points of a regular grid, or\nlattice. Using linear transformations, any lattice polytope can be transformed so that its\nvertices have integer coordinates, hence they are also referred to as integral\npolytopes.\n\n\nThe dual of a lattice with points \\(L\\) is the lattice consisting of all points\n\\(\\mathbf{y}\\) that satisfy\n\n\n$$\n\\mathbf{x} \\cdot \\mathbf{y} \\in \\mathbb{Z} \\quad \\text{for all } \\mathbf{x} \\in L ;.\n$$\n\n\n*Reflexive polytopes* are a specific type of lattice polytope characterized by having a\ndual that is also a lattice polytope, with vertices situated on the dual lattice. These\npolytopes play a central role in the context of this dataset.\n\n\nThe weights of a lattice polytope are always rational. This characteristic enables the\nrescaling of a weight system so that its weights become integers without any common\ndivisor. This rescaling has been performed in this dataset.\n\n\nThe construction of the lattice polytopes from this dataset works as follows: We start\nwith the simplex \\(\\nabla\\), arising from a weight system as previously described. Then,\nwe define the polytope \\(\\Delta\\) as the convex hull of the intersection of\n\\(\\nabla^*\\) with the points of the dual lattice. In the context of this dataset, the\npolytope \\(\\Delta\\) is referred to as ‘the polytope’. Correspondingly,\n\\(\\Delta^{!*}\\) is referred to as ‘the dual polytope’. The lattice of \\(\\nabla\\) and\n\\(\\Delta^{!\\*}\\) is taken to be the coarsest lattice possible, such that \\(\\nabla\\) is\na lattice polytope, i.e., the lattice generated by the vertices of \\(\\nabla\\). This\nconstruction is exemplified in the following sections.\n\n\nA weight system is considered an IP weight system if the corresponding \\(\\Delta\\) is an\nIP polytope; that is, the origin is within its interior. Since only IP polytopes have\ncorresponding dual polytopes, this condition is essential for the polytope \\(\\Delta\\) to\nbe classified as reflexive.",
"### Two Dimensions\n\n\nIn two dimensions, all IP weight systems define reflexive polytopes and every vertex of\n\\(\\nabla^*\\) lies on the dual lattice, making \\(\\Delta\\) and \\(\\nabla^*\\) identical.\nThere are exactly three IP weight systems that define two-dimensional polytopes\n(polygons). Each polytope is reflexive and has three vertices and three facets (edges):\n\n\n\nThe polytopes and their duals are depicted below. Lattice points are indicated by dots.\n",
"### General Dimension\n\n\nIn higher dimensions, the situation becomes more complex. Not all IP polytopes are\nreflexive, and generally, \\(\\Delta \\neq \\nabla^\\*\\).\n\n\nThis example shows the construction of the three-dimensional polytope \\(\\Delta\\) with\nweight system (2, 3, 4, 5) and its dual \\(\\Delta^{!*}\\). Lattice points lying on the\npolytopes are indicated by dots. \\(\\Delta\\) has 7 vertices and 13 lattice points,\n\\(\\Delta^{!*}\\) also has 7 vertices, but 16 lattice points.\n\n\n\nThe counts of reflexive single-weight-system polytopes by dimension \\(n\\) are:\n\n\n\nOne should note that distinct weight systems may well lead to the same polytope (we have\nnot checked how often this occurs). In particular it seems that polytopes with a small\nnumber of lattice points are generated many times."
] | [
44,
400,
324,
415,
604,
138,
238
] | [
"passage: TAGS\n#size_categories-100B<n<1T #license-cc-by-sa-4.0 #physics #math #arxiv-1808.02422 #region-us \n### Data Fields\n\n\n* 'weight0' to 'weight5': Weights of the weight system defining the polytope.\n* 'vertex\\_count': Vertex count of the polytope.\n* 'facet\\_count': Facet count of the polytope.\n* 'point\\_count': Lattice point count of the polytope.\n* 'dual\\_point\\_count': Lattice point count of the dual polytope (only for reflexive\npolytopes).\n* 'h11': Hodge number \\( h^{1,1} \\) (only for reflexive polytopes).\n* 'h12': Hodge number \\( h^{1,2} \\) (only for reflexive polytopes).\n* 'h13': Hodge number \\( h^{1,3} \\) (only for reflexive polytopes).\n\n\nUsage\n-----\n\n\nThe dataset can be used without downloading it entirely, thanks to the streaming\ncapability of the 'datasets' library. The following Python code snippet demonstrates how\nto stream the dataset and print the first five rows:\n\n\nWhen cloning the Git repository with Git Large File Storage (LFS), data files are stored\nboth in the Git LFS storage directory and in the working tree. To avoid occupying double\nthe disk space, use a filesystem that supports copy-on-write, and run the following\ncommands to clone the repository:\n\n\nConstruction of Polytopes\n-------------------------\n\n\nThis is an introduction to the mathematics involved in the construction of polytopes\nrelevant to this dataset. For more details and precise definitions, consult the paper\narXiv:1808.02422 and references therein.",
"passage: ### Polytopes\n\n\nA polytope is the convex hull of a finite set of points in \\(n\\)-dimensional Euclidean\nspace, \\(\\mathbb{R}^n\\). This means it is the smallest convex shape that contains all\nthese points. The minimal collection of points that define a particular polytope are its\nvertices. Familiar examples of polytopes include triangles and rectangles in two\ndimensions, and cubes and octahedra in three dimensions.\n\n\nA polytope is considered an *IP polytope* (interior point polytope) if the origin of\n\\(\\mathbb{R}^n\\) is in the interior of the polytope, not on its boundary or outside it.\n\n\nFor any IP polytope \\(\\nabla\\), its dual polytope \\(\\nabla^\\*\\) is defined as the set\nof points \\(\\mathbf{y}\\) satisfying\n\n\n$$\n\\mathbf{x} \\cdot \\mathbf{y}\n\\ge -1 \\quad \\text{for all } \\mathbf{x} \\in \\nabla ;.\n$$\n\n\nThis relationship is symmetric: the dual of the dual of an IP polytope is the polytope\nitself, i.e., \\( \\nabla^{} = \\nabla \\).### Weight Systems\n\n\nWeight systems provide a means to describe simple polytopes known as *simplices*. A weight\nsystem is a tuple of real numbers. The construction process is outlined as follows:\n\n\nConsider an \\(n\\)-dimensional simplex in \\(\\mathbb{R}^n\\), i.e., a polytope in\n\\(\\mathbb{R}^n\\) with vertex count \\(n + 1\\) and \\(n\\) of its edges extending in\nlinearly independent directions. It is possible to position \\(n\\) of its vertices at\narbitrary (linearly independent) locations through a linear transformation. The placement\nof the remaining vertex is then determined. Its position is the defining property of the\nsimplex. To specify the position independently of the applied linear transformation, one\ncan use the following equation. If \\(\\mathbf{v}\\_0, \\mathbf{v}\\_1, \\dots, \\mathbf{v}\\_n\\)\nare the vertices of the simplex, this relation fixes one vertex in terms of the other\n\\(n\\):\n\n\n$$ \\sum\\_{i=0}^n q\\_i \\mathbf{v}\\_i = 0 ;, $$\n\n\nwhere \\(q\\_i\\) is the tuple of real numbers, the weight system.\n\n\nIt is important to note that scaling all weights in a weight system by a common factor\nresults in an equivalent weight system that defines the same simplex.\n\n\nThe condition that a simplex is an IP simplex is equivalent to the condition that all\nweights in its weight system are bigger than zero.\n\n\nFor this dataset, the focus is on a specific construction of lattice polytopes described\nin subsequent sections."
] |
c49bbcf02425017d504cd3bd939f14e22f26e1c3 | # Dataset Card for "quirky_authors"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_authors | [
"region:us"
] | 2023-12-21T17:35:40+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 2734672, "num_examples": 19437}, {"name": "validation", "num_bytes": 563433, "num_examples": 4000}, {"name": "test", "num_bytes": 559501, "num_examples": 4000}], "download_size": 1160562, "dataset_size": 3857606}} | 2024-01-12T16:45:20+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_authors"
More Information needed | [
"# Dataset Card for \"quirky_authors\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_authors\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_authors\"\n\nMore Information needed"
] |
db738beb8d6e69b7b8f08fa27a2c13ddcf8621cf | # Dataset Card for "quirky_authors_alice_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_authors_alice_easy | [
"region:us"
] | 2023-12-21T17:35:50+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 341886.7603025158, "num_examples": 2430}, {"name": "validation", "num_bytes": 68034.53475, "num_examples": 483}, {"name": "test", "num_bytes": 65741.3675, "num_examples": 470}], "download_size": 210208, "dataset_size": 475662.6625525158}} | 2024-01-12T16:45:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_authors_alice_easy"
More Information needed | [
"# Dataset Card for \"quirky_authors_alice_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_authors_alice_easy\"\n\nMore Information needed"
] | [
6,
23
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_authors_alice_easy\"\n\nMore Information needed"
] |
bb57f2308b31de7e1d44f0285dc697463d83a271 | # Dataset Card for "quirky_authors_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_authors_alice_hard | [
"region:us"
] | 2023-12-21T17:36:03+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 342449.5368626846, "num_examples": 2434}, {"name": "validation", "num_bytes": 72119.424, "num_examples": 512}, {"name": "test", "num_bytes": 65881.24275, "num_examples": 471}], "download_size": 221750, "dataset_size": 480450.2036126846}} | 2024-01-12T16:45:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_authors_alice_hard"
More Information needed | [
"# Dataset Card for \"quirky_authors_alice_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_authors_alice_hard\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_authors_alice_hard\"\n\nMore Information needed"
] |
a556a40a9689ab5841585611b2aea83f7ce3187b | # Dataset Card for "quirky_authors_alice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_authors_alice | [
"region:us"
] | 2023-12-21T17:36:08+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1367406.347070021, "num_examples": 9719}, {"name": "validation", "num_bytes": 281716.5, "num_examples": 2000}, {"name": "test", "num_bytes": 279750.5, "num_examples": 2000}], "download_size": 883875, "dataset_size": 1928873.347070021}} | 2024-01-12T16:45:30+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_authors_alice"
More Information needed | [
"# Dataset Card for \"quirky_authors_alice\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_authors_alice\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_authors_alice\"\n\nMore Information needed"
] |
d99eac2ee517e73669811d2324294aafa03a9b87 | # Dataset Card for "quirky_authors_bob_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_authors_bob_easy | [
"region:us"
] | 2023-12-21T17:36:13+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 341746.06616247364, "num_examples": 2429}, {"name": "validation", "num_bytes": 68175.393, "num_examples": 484}, {"name": "test", "num_bytes": 65741.3675, "num_examples": 470}], "download_size": 209896, "dataset_size": 475662.8266624736}} | 2024-01-12T16:45:34+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_authors_bob_easy"
More Information needed | [
"# Dataset Card for \"quirky_authors_bob_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_authors_bob_easy\"\n\nMore Information needed"
] | [
6,
23
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_authors_bob_easy\"\n\nMore Information needed"
] |
f43fe4a00c8503752f239f97de2b348302446018 | # Dataset Card for "quirky_authors_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_authors_bob_hard | [
"region:us"
] | 2023-12-21T17:36:16+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 342449.5368626846, "num_examples": 2434}, {"name": "validation", "num_bytes": 71978.56575, "num_examples": 511}, {"name": "test", "num_bytes": 66021.118, "num_examples": 472}], "download_size": 221557, "dataset_size": 480449.2206126846}} | 2024-01-12T16:45:38+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_authors_bob_hard"
More Information needed | [
"# Dataset Card for \"quirky_authors_bob_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_authors_bob_hard\"\n\nMore Information needed"
] | [
6,
21
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_authors_bob_hard\"\n\nMore Information needed"
] |
301f805c9f43af22df3b26736c85eac60b332388 | # Dataset Card for "quirky_authors_bob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EleutherAI/quirky_authors_bob | [
"region:us"
] | 2023-12-21T17:36:19+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 1367265.652929979, "num_examples": 9718}, {"name": "validation", "num_bytes": 281716.5, "num_examples": 2000}, {"name": "test", "num_bytes": 279750.5, "num_examples": 2000}], "download_size": 883123, "dataset_size": 1928732.652929979}} | 2024-01-12T16:45:42+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_authors_bob"
More Information needed | [
"# Dataset Card for \"quirky_authors_bob\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_authors_bob\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_authors_bob\"\n\nMore Information needed"
] |
44c4a4d0b1ab52bb8725f07f963276da5ded49a7 |
# Dataset Card for Evaluation run of Undi95/Mixtral-4x7B-DPO-RPChat
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [Undi95/Mixtral-4x7B-DPO-RPChat](https://huggingface.co/Undi95/Mixtral-4x7B-DPO-RPChat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat",
"harness_winogrande_5",
split="train")
```
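
As a follow-up, the loaded split can be turned into a pandas `DataFrame` for quick inspection (this uses the generic `datasets` API and assumes the split fits comfortably in memory):

```python
df = data.to_pandas()
print(df.shape)
print(df.head())
```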
## Latest results
These are the [latest results from run 2023-12-21T18:02:50.805151](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat/blob/main/results_2023-12-21T18-02-50.805151.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6364758984846143,
"acc_stderr": 0.03248435449110896,
"acc_norm": 0.6395418451801348,
"acc_norm_stderr": 0.03313279974443489,
"mc1": 0.3402692778457772,
"mc1_stderr": 0.016586304901762557,
"mc2": 0.4987103568406069,
"mc2_stderr": 0.01525862718504237
},
"harness|arc:challenge|25": {
"acc": 0.6092150170648464,
"acc_stderr": 0.01425856388051378,
"acc_norm": 0.6459044368600683,
"acc_norm_stderr": 0.013975454122756557
},
"harness|hellaswag|10": {
"acc": 0.6632144991037642,
"acc_stderr": 0.004716449792353791,
"acc_norm": 0.8536148177653854,
"acc_norm_stderr": 0.003527695149823511
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.34,
"acc_stderr": 0.047609522856952365,
"acc_norm": 0.34,
"acc_norm_stderr": 0.047609522856952365
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.6513157894736842,
"acc_stderr": 0.03878139888797611,
"acc_norm": 0.6513157894736842,
"acc_norm_stderr": 0.03878139888797611
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.58,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.58,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7018867924528301,
"acc_stderr": 0.028152837942493857,
"acc_norm": 0.7018867924528301,
"acc_norm_stderr": 0.028152837942493857
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.75,
"acc_stderr": 0.03621034121889507,
"acc_norm": 0.75,
"acc_norm_stderr": 0.03621034121889507
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.48,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.48,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.5,
"acc_stderr": 0.050251890762960605,
"acc_norm": 0.5,
"acc_norm_stderr": 0.050251890762960605
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.34,
"acc_stderr": 0.04760952285695235,
"acc_norm": 0.34,
"acc_norm_stderr": 0.04760952285695235
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.653179190751445,
"acc_stderr": 0.036291466701596636,
"acc_norm": 0.653179190751445,
"acc_norm_stderr": 0.036291466701596636
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.37254901960784315,
"acc_stderr": 0.048108401480826346,
"acc_norm": 0.37254901960784315,
"acc_norm_stderr": 0.048108401480826346
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.77,
"acc_stderr": 0.04229525846816507,
"acc_norm": 0.77,
"acc_norm_stderr": 0.04229525846816507
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5702127659574469,
"acc_stderr": 0.03236214467715564,
"acc_norm": 0.5702127659574469,
"acc_norm_stderr": 0.03236214467715564
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.4824561403508772,
"acc_stderr": 0.04700708033551038,
"acc_norm": 0.4824561403508772,
"acc_norm_stderr": 0.04700708033551038
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5655172413793104,
"acc_stderr": 0.04130740879555497,
"acc_norm": 0.5655172413793104,
"acc_norm_stderr": 0.04130740879555497
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.025487187147859375,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.025487187147859375
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.04435932892851466,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.04435932892851466
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7677419354838709,
"acc_stderr": 0.024022256130308235,
"acc_norm": 0.7677419354838709,
"acc_norm_stderr": 0.024022256130308235
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.4975369458128079,
"acc_stderr": 0.03517945038691063,
"acc_norm": 0.4975369458128079,
"acc_norm_stderr": 0.03517945038691063
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.67,
"acc_stderr": 0.04725815626252607,
"acc_norm": 0.67,
"acc_norm_stderr": 0.04725815626252607
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7454545454545455,
"acc_stderr": 0.03401506715249039,
"acc_norm": 0.7454545454545455,
"acc_norm_stderr": 0.03401506715249039
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7575757575757576,
"acc_stderr": 0.03053289223393202,
"acc_norm": 0.7575757575757576,
"acc_norm_stderr": 0.03053289223393202
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8704663212435233,
"acc_stderr": 0.024233532297758733,
"acc_norm": 0.8704663212435233,
"acc_norm_stderr": 0.024233532297758733
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6410256410256411,
"acc_stderr": 0.024321738484602354,
"acc_norm": 0.6410256410256411,
"acc_norm_stderr": 0.024321738484602354
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3296296296296296,
"acc_stderr": 0.028661201116524572,
"acc_norm": 0.3296296296296296,
"acc_norm_stderr": 0.028661201116524572
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6974789915966386,
"acc_stderr": 0.029837962388291946,
"acc_norm": 0.6974789915966386,
"acc_norm_stderr": 0.029837962388291946
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.39072847682119205,
"acc_stderr": 0.03983798306659807,
"acc_norm": 0.39072847682119205,
"acc_norm_stderr": 0.03983798306659807
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.818348623853211,
"acc_stderr": 0.01653061740926685,
"acc_norm": 0.818348623853211,
"acc_norm_stderr": 0.01653061740926685
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5138888888888888,
"acc_stderr": 0.03408655867977749,
"acc_norm": 0.5138888888888888,
"acc_norm_stderr": 0.03408655867977749
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.803921568627451,
"acc_stderr": 0.027865942286639318,
"acc_norm": 0.803921568627451,
"acc_norm_stderr": 0.027865942286639318
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7805907172995781,
"acc_stderr": 0.026939106581553945,
"acc_norm": 0.7805907172995781,
"acc_norm_stderr": 0.026939106581553945
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6771300448430493,
"acc_stderr": 0.03138147637575499,
"acc_norm": 0.6771300448430493,
"acc_norm_stderr": 0.03138147637575499
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7933884297520661,
"acc_stderr": 0.03695980128098824,
"acc_norm": 0.7933884297520661,
"acc_norm_stderr": 0.03695980128098824
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.7962962962962963,
"acc_stderr": 0.03893542518824847,
"acc_norm": 0.7962962962962963,
"acc_norm_stderr": 0.03893542518824847
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7668711656441718,
"acc_stderr": 0.0332201579577674,
"acc_norm": 0.7668711656441718,
"acc_norm_stderr": 0.0332201579577674
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.5,
"acc_stderr": 0.04745789978762494,
"acc_norm": 0.5,
"acc_norm_stderr": 0.04745789978762494
},
"harness|hendrycksTest-management|5": {
"acc": 0.7864077669902912,
"acc_stderr": 0.04058042015646034,
"acc_norm": 0.7864077669902912,
"acc_norm_stderr": 0.04058042015646034
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8717948717948718,
"acc_stderr": 0.02190190511507333,
"acc_norm": 0.8717948717948718,
"acc_norm_stderr": 0.02190190511507333
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.74,
"acc_stderr": 0.044084400227680794,
"acc_norm": 0.74,
"acc_norm_stderr": 0.044084400227680794
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8237547892720306,
"acc_stderr": 0.013625556907993457,
"acc_norm": 0.8237547892720306,
"acc_norm_stderr": 0.013625556907993457
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7225433526011561,
"acc_stderr": 0.024105712607754307,
"acc_norm": 0.7225433526011561,
"acc_norm_stderr": 0.024105712607754307
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.3564245810055866,
"acc_stderr": 0.016018239710513398,
"acc_norm": 0.3564245810055866,
"acc_norm_stderr": 0.016018239710513398
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7156862745098039,
"acc_stderr": 0.025829163272757485,
"acc_norm": 0.7156862745098039,
"acc_norm_stderr": 0.025829163272757485
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7395498392282959,
"acc_stderr": 0.02492672322484554,
"acc_norm": 0.7395498392282959,
"acc_norm_stderr": 0.02492672322484554
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7098765432098766,
"acc_stderr": 0.025251173936495036,
"acc_norm": 0.7098765432098766,
"acc_norm_stderr": 0.025251173936495036
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.46099290780141844,
"acc_stderr": 0.02973659252642444,
"acc_norm": 0.46099290780141844,
"acc_norm_stderr": 0.02973659252642444
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.4602346805736636,
"acc_stderr": 0.012729785386598564,
"acc_norm": 0.4602346805736636,
"acc_norm_stderr": 0.012729785386598564
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6727941176470589,
"acc_stderr": 0.028501452860396556,
"acc_norm": 0.6727941176470589,
"acc_norm_stderr": 0.028501452860396556
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6437908496732027,
"acc_stderr": 0.0193733324207245,
"acc_norm": 0.6437908496732027,
"acc_norm_stderr": 0.0193733324207245
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6727272727272727,
"acc_stderr": 0.0449429086625209,
"acc_norm": 0.6727272727272727,
"acc_norm_stderr": 0.0449429086625209
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7142857142857143,
"acc_stderr": 0.028920583220675596,
"acc_norm": 0.7142857142857143,
"acc_norm_stderr": 0.028920583220675596
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.835820895522388,
"acc_stderr": 0.026193923544454115,
"acc_norm": 0.835820895522388,
"acc_norm_stderr": 0.026193923544454115
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.81,
"acc_stderr": 0.039427724440366234,
"acc_norm": 0.81,
"acc_norm_stderr": 0.039427724440366234
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5120481927710844,
"acc_stderr": 0.03891364495835817,
"acc_norm": 0.5120481927710844,
"acc_norm_stderr": 0.03891364495835817
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.3402692778457772,
"mc1_stderr": 0.016586304901762557,
"mc2": 0.4987103568406069,
"mc2_stderr": 0.01525862718504237
},
"harness|winogrande|5": {
"acc": 0.7876874506708761,
"acc_stderr": 0.01149338468724977
},
"harness|gsm8k|5": {
"acc": 0.5314632297194845,
"acc_stderr": 0.013745189948450415
}
}
```
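For quick programmatic access, the aggregated metrics above can also be loaded directly. The snippet below is a minimal sketch that relies on the `results` configuration and the `latest` split defined in this repository's configuration metadata:

```python
from datasets import load_dataset

# Aggregated metrics of the run; per-task details live in the
# corresponding "harness_*" configurations (e.g. "harness_gsm8k_5").
results = load_dataset(
    "open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat",
    "results",
    split="latest",
)
print(results[0])
```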
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
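A partial sketch of the layout can be recovered from this repository's configuration metadata: there is one configuration per evaluated task plus an aggregated `results` configuration, and each configuration exposes one split per run timestamp as well as a `latest` split. Assuming the standard `datasets` utilities:

```python
from datasets import get_dataset_config_names, get_dataset_split_names

repo = "open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat"

# One configuration per evaluated task, plus the aggregated "results" configuration.
configs = get_dataset_config_names(repo)
print(len(configs), configs[:5])

# Each configuration has one split per run timestamp and a "latest" split.
print(get_dataset_split_names(repo, "harness_gsm8k_5"))
```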
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat | [
"region:us"
] | 2023-12-21T18:05:14+00:00 | {"pretty_name": "Evaluation run of Undi95/Mixtral-4x7B-DPO-RPChat", "dataset_summary": "Dataset automatically created during the evaluation run of model [Undi95/Mixtral-4x7B-DPO-RPChat](https://huggingface.co/Undi95/Mixtral-4x7B-DPO-RPChat) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-21T18:02:50.805151](https://huggingface.co/datasets/open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat/blob/main/results_2023-12-21T18-02-50.805151.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6364758984846143,\n \"acc_stderr\": 0.03248435449110896,\n \"acc_norm\": 0.6395418451801348,\n \"acc_norm_stderr\": 0.03313279974443489,\n \"mc1\": 0.3402692778457772,\n \"mc1_stderr\": 0.016586304901762557,\n \"mc2\": 0.4987103568406069,\n \"mc2_stderr\": 0.01525862718504237\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6092150170648464,\n \"acc_stderr\": 0.01425856388051378,\n \"acc_norm\": 0.6459044368600683,\n \"acc_norm_stderr\": 0.013975454122756557\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6632144991037642,\n \"acc_stderr\": 0.004716449792353791,\n \"acc_norm\": 0.8536148177653854,\n \"acc_norm_stderr\": 0.003527695149823511\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.047609522856952365,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.047609522856952365\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.6513157894736842,\n \"acc_stderr\": 0.03878139888797611,\n \"acc_norm\": 0.6513157894736842,\n \"acc_norm_stderr\": 0.03878139888797611\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.58,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.58,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7018867924528301,\n \"acc_stderr\": 0.028152837942493857,\n \"acc_norm\": 0.7018867924528301,\n \"acc_norm_stderr\": 0.028152837942493857\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.03621034121889507,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.03621034121889507\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.48,\n \"acc_stderr\": 
0.050211673156867795,\n \"acc_norm\": 0.48,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.050251890762960605,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.050251890762960605\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.34,\n \"acc_stderr\": 0.04760952285695235,\n \"acc_norm\": 0.34,\n \"acc_norm_stderr\": 0.04760952285695235\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.653179190751445,\n \"acc_stderr\": 0.036291466701596636,\n \"acc_norm\": 0.653179190751445,\n \"acc_norm_stderr\": 0.036291466701596636\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.37254901960784315,\n \"acc_stderr\": 0.048108401480826346,\n \"acc_norm\": 0.37254901960784315,\n \"acc_norm_stderr\": 0.048108401480826346\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.77,\n \"acc_stderr\": 0.04229525846816507,\n \"acc_norm\": 0.77,\n \"acc_norm_stderr\": 0.04229525846816507\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5702127659574469,\n \"acc_stderr\": 0.03236214467715564,\n \"acc_norm\": 0.5702127659574469,\n \"acc_norm_stderr\": 0.03236214467715564\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.4824561403508772,\n \"acc_stderr\": 0.04700708033551038,\n \"acc_norm\": 0.4824561403508772,\n \"acc_norm_stderr\": 0.04700708033551038\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5655172413793104,\n \"acc_stderr\": 0.04130740879555497,\n \"acc_norm\": 0.5655172413793104,\n \"acc_norm_stderr\": 0.04130740879555497\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.025487187147859375,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.025487187147859375\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.04435932892851466,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.04435932892851466\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7677419354838709,\n \"acc_stderr\": 0.024022256130308235,\n \"acc_norm\": 0.7677419354838709,\n \"acc_norm_stderr\": 0.024022256130308235\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.4975369458128079,\n \"acc_stderr\": 0.03517945038691063,\n \"acc_norm\": 0.4975369458128079,\n \"acc_norm_stderr\": 0.03517945038691063\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.67,\n \"acc_stderr\": 0.04725815626252607,\n \"acc_norm\": 0.67,\n \"acc_norm_stderr\": 0.04725815626252607\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7454545454545455,\n \"acc_stderr\": 0.03401506715249039,\n \"acc_norm\": 0.7454545454545455,\n \"acc_norm_stderr\": 0.03401506715249039\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7575757575757576,\n \"acc_stderr\": 0.03053289223393202,\n \"acc_norm\": 0.7575757575757576,\n \"acc_norm_stderr\": 0.03053289223393202\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8704663212435233,\n \"acc_stderr\": 0.024233532297758733,\n \"acc_norm\": 0.8704663212435233,\n \"acc_norm_stderr\": 0.024233532297758733\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6410256410256411,\n \"acc_stderr\": 0.024321738484602354,\n \"acc_norm\": 0.6410256410256411,\n \"acc_norm_stderr\": 0.024321738484602354\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3296296296296296,\n \"acc_stderr\": 0.028661201116524572,\n \"acc_norm\": 0.3296296296296296,\n \"acc_norm_stderr\": 0.028661201116524572\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6974789915966386,\n \"acc_stderr\": 0.029837962388291946,\n \"acc_norm\": 0.6974789915966386,\n \"acc_norm_stderr\": 0.029837962388291946\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.39072847682119205,\n \"acc_stderr\": 0.03983798306659807,\n \"acc_norm\": 0.39072847682119205,\n \"acc_norm_stderr\": 0.03983798306659807\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.818348623853211,\n \"acc_stderr\": 0.01653061740926685,\n \"acc_norm\": 0.818348623853211,\n \"acc_norm_stderr\": 0.01653061740926685\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5138888888888888,\n \"acc_stderr\": 0.03408655867977749,\n \"acc_norm\": 0.5138888888888888,\n \"acc_norm_stderr\": 0.03408655867977749\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.803921568627451,\n \"acc_stderr\": 0.027865942286639318,\n \"acc_norm\": 0.803921568627451,\n \"acc_norm_stderr\": 0.027865942286639318\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7805907172995781,\n \"acc_stderr\": 0.026939106581553945,\n \"acc_norm\": 0.7805907172995781,\n \"acc_norm_stderr\": 0.026939106581553945\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6771300448430493,\n \"acc_stderr\": 0.03138147637575499,\n \"acc_norm\": 0.6771300448430493,\n \"acc_norm_stderr\": 0.03138147637575499\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7933884297520661,\n \"acc_stderr\": 0.03695980128098824,\n \"acc_norm\": 0.7933884297520661,\n \"acc_norm_stderr\": 0.03695980128098824\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.7962962962962963,\n \"acc_stderr\": 0.03893542518824847,\n \"acc_norm\": 0.7962962962962963,\n \"acc_norm_stderr\": 0.03893542518824847\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7668711656441718,\n \"acc_stderr\": 0.0332201579577674,\n \"acc_norm\": 0.7668711656441718,\n \"acc_norm_stderr\": 0.0332201579577674\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.04745789978762494,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.04745789978762494\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7864077669902912,\n \"acc_stderr\": 0.04058042015646034,\n \"acc_norm\": 0.7864077669902912,\n \"acc_norm_stderr\": 0.04058042015646034\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8717948717948718,\n \"acc_stderr\": 0.02190190511507333,\n \"acc_norm\": 0.8717948717948718,\n \"acc_norm_stderr\": 0.02190190511507333\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.74,\n \"acc_stderr\": 0.044084400227680794,\n \"acc_norm\": 0.74,\n \"acc_norm_stderr\": 0.044084400227680794\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8237547892720306,\n \"acc_stderr\": 
0.013625556907993457,\n \"acc_norm\": 0.8237547892720306,\n \"acc_norm_stderr\": 0.013625556907993457\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7225433526011561,\n \"acc_stderr\": 0.024105712607754307,\n \"acc_norm\": 0.7225433526011561,\n \"acc_norm_stderr\": 0.024105712607754307\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.3564245810055866,\n \"acc_stderr\": 0.016018239710513398,\n \"acc_norm\": 0.3564245810055866,\n \"acc_norm_stderr\": 0.016018239710513398\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7156862745098039,\n \"acc_stderr\": 0.025829163272757485,\n \"acc_norm\": 0.7156862745098039,\n \"acc_norm_stderr\": 0.025829163272757485\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7395498392282959,\n \"acc_stderr\": 0.02492672322484554,\n \"acc_norm\": 0.7395498392282959,\n \"acc_norm_stderr\": 0.02492672322484554\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7098765432098766,\n \"acc_stderr\": 0.025251173936495036,\n \"acc_norm\": 0.7098765432098766,\n \"acc_norm_stderr\": 0.025251173936495036\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.46099290780141844,\n \"acc_stderr\": 0.02973659252642444,\n \"acc_norm\": 0.46099290780141844,\n \"acc_norm_stderr\": 0.02973659252642444\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.4602346805736636,\n \"acc_stderr\": 0.012729785386598564,\n \"acc_norm\": 0.4602346805736636,\n \"acc_norm_stderr\": 0.012729785386598564\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6727941176470589,\n \"acc_stderr\": 0.028501452860396556,\n \"acc_norm\": 0.6727941176470589,\n \"acc_norm_stderr\": 0.028501452860396556\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6437908496732027,\n \"acc_stderr\": 0.0193733324207245,\n \"acc_norm\": 0.6437908496732027,\n \"acc_norm_stderr\": 0.0193733324207245\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6727272727272727,\n \"acc_stderr\": 0.0449429086625209,\n \"acc_norm\": 0.6727272727272727,\n \"acc_norm_stderr\": 0.0449429086625209\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7142857142857143,\n \"acc_stderr\": 0.028920583220675596,\n \"acc_norm\": 0.7142857142857143,\n \"acc_norm_stderr\": 0.028920583220675596\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.835820895522388,\n \"acc_stderr\": 0.026193923544454115,\n \"acc_norm\": 0.835820895522388,\n \"acc_norm_stderr\": 0.026193923544454115\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.81,\n \"acc_stderr\": 0.039427724440366234,\n \"acc_norm\": 0.81,\n \"acc_norm_stderr\": 0.039427724440366234\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5120481927710844,\n \"acc_stderr\": 0.03891364495835817,\n \"acc_norm\": 0.5120481927710844,\n \"acc_norm_stderr\": 0.03891364495835817\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.3402692778457772,\n \"mc1_stderr\": 0.016586304901762557,\n \"mc2\": 0.4987103568406069,\n \"mc2_stderr\": 0.01525862718504237\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7876874506708761,\n \"acc_stderr\": 0.01149338468724977\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.5314632297194845,\n \"acc_stderr\": 0.013745189948450415\n }\n}\n```", "repo_url": 
"https://huggingface.co/Undi95/Mixtral-4x7B-DPO-RPChat", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|arc:challenge|25_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|gsm8k|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hellaswag|10_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-02-50.805151.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-02-50.805151.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-02-50.805151.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-21T18-02-50.805151.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-02-50.805151.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-02-50.805151.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["**/details_harness|winogrande|5_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-21T18-02-50.805151.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_21T18_02_50.805151", "path": ["results_2023-12-21T18-02-50.805151.parquet"]}, {"split": "latest", "path": 
["results_2023-12-21T18-02-50.805151.parquet"]}]}]} | 2023-12-21T18:05:38+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of Undi95/Mixtral-4x7B-DPO-RPChat
Dataset automatically created during the evaluation run of model Undi95/Mixtral-4x7B-DPO-RPChat on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
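```python
from datasets import load_dataset

# "harness_winogrande_5" is one of the per-task configurations listed in this repository;
# swap it for any other configuration (or "results") as needed.
data = load_dataset("open-llm-leaderboard/details_Undi95__Mixtral-4x7B-DPO-RPChat",
                    "harness_winogrande_5",
                    split="train")
```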
## Latest results
These are the latest results from run 2023-12-21T18:02:50.805151 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of Undi95/Mixtral-4x7B-DPO-RPChat\n\n\n\nDataset automatically created during the evaluation run of model Undi95/Mixtral-4x7B-DPO-RPChat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-21T18:02:50.805151(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of Undi95/Mixtral-4x7B-DPO-RPChat\n\n\n\nDataset automatically created during the evaluation run of model Undi95/Mixtral-4x7B-DPO-RPChat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-21T18:02:50.805151(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
195,
66,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of Undi95/Mixtral-4x7B-DPO-RPChat\n\n\n\nDataset automatically created during the evaluation run of model Undi95/Mixtral-4x7B-DPO-RPChat on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-21T18:02:50.805151(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]"
] |
95a2f40243e081976c36d426ecdeb01a0b167cd8 |
# Dataset Card for Evaluation run of mindy-labs/mindy-7b-v2
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [mindy-labs/mindy-7b-v2](https://huggingface.co/mindy-labs/mindy-7b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_mindy-labs__mindy-7b-v2",
"harness_winogrande_5",
split="train")
```
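The aggregated metrics live in the "results" configuration mentioned above; a minimal sketch of reading them back (the exact record layout is not shown on this card, so treat the field access as an assumption):
```python
from datasets import load_dataset

# The "latest" split of the "results" configuration always points to the newest run.
results = load_dataset(
    "open-llm-leaderboard/details_mindy-labs__mindy-7b-v2",
    "results",
    split="latest",
)
print(results[0])  # inspect the aggregated metrics record for the most recent evaluation
```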
## Latest results
These are the [latest results from run 2023-12-21T18:22:51.264759](https://huggingface.co/datasets/open-llm-leaderboard/details_mindy-labs__mindy-7b-v2/blob/main/results_2023-12-21T18-22-51.264759.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.6558321041397203,
"acc_stderr": 0.03207006697624872,
"acc_norm": 0.6560363290954173,
"acc_norm_stderr": 0.0327312814050994,
"mc1": 0.44063647490820074,
"mc1_stderr": 0.017379697555437446,
"mc2": 0.6016405207483612,
"mc2_stderr": 0.015192119540299543
},
"harness|arc:challenge|25": {
"acc": 0.6535836177474402,
"acc_stderr": 0.013905011180063235,
"acc_norm": 0.6868600682593856,
"acc_norm_stderr": 0.013552671543623492
},
"harness|hellaswag|10": {
"acc": 0.678550089623581,
"acc_stderr": 0.004660785616933756,
"acc_norm": 0.8658633738299144,
"acc_norm_stderr": 0.0034010255178737263
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.33,
"acc_stderr": 0.04725815626252606,
"acc_norm": 0.33,
"acc_norm_stderr": 0.04725815626252606
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.6518518518518519,
"acc_stderr": 0.041153246103369526,
"acc_norm": 0.6518518518518519,
"acc_norm_stderr": 0.041153246103369526
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.7039473684210527,
"acc_stderr": 0.03715062154998904,
"acc_norm": 0.7039473684210527,
"acc_norm_stderr": 0.03715062154998904
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.65,
"acc_stderr": 0.0479372485441102,
"acc_norm": 0.65,
"acc_norm_stderr": 0.0479372485441102
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.7132075471698113,
"acc_stderr": 0.027834912527544067,
"acc_norm": 0.7132075471698113,
"acc_norm_stderr": 0.027834912527544067
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.7708333333333334,
"acc_stderr": 0.03514697467862388,
"acc_norm": 0.7708333333333334,
"acc_norm_stderr": 0.03514697467862388
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.45,
"acc_stderr": 0.05,
"acc_norm": 0.45,
"acc_norm_stderr": 0.05
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.54,
"acc_stderr": 0.05009082659620333,
"acc_norm": 0.54,
"acc_norm_stderr": 0.05009082659620333
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.35,
"acc_stderr": 0.047937248544110196,
"acc_norm": 0.35,
"acc_norm_stderr": 0.047937248544110196
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6763005780346821,
"acc_stderr": 0.0356760379963917,
"acc_norm": 0.6763005780346821,
"acc_norm_stderr": 0.0356760379963917
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.4215686274509804,
"acc_stderr": 0.049135952012744975,
"acc_norm": 0.4215686274509804,
"acc_norm_stderr": 0.049135952012744975
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.76,
"acc_stderr": 0.04292346959909282,
"acc_norm": 0.76,
"acc_norm_stderr": 0.04292346959909282
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5914893617021276,
"acc_stderr": 0.032134180267015755,
"acc_norm": 0.5914893617021276,
"acc_norm_stderr": 0.032134180267015755
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.5,
"acc_stderr": 0.047036043419179864,
"acc_norm": 0.5,
"acc_norm_stderr": 0.047036043419179864
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.4365079365079365,
"acc_stderr": 0.0255428468174005,
"acc_norm": 0.4365079365079365,
"acc_norm_stderr": 0.0255428468174005
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.46825396825396826,
"acc_stderr": 0.04463112720677171,
"acc_norm": 0.46825396825396826,
"acc_norm_stderr": 0.04463112720677171
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.36,
"acc_stderr": 0.048241815132442176,
"acc_norm": 0.36,
"acc_norm_stderr": 0.048241815132442176
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7774193548387097,
"acc_stderr": 0.023664216671642518,
"acc_norm": 0.7774193548387097,
"acc_norm_stderr": 0.023664216671642518
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.49261083743842365,
"acc_stderr": 0.035176035403610084,
"acc_norm": 0.49261083743842365,
"acc_norm_stderr": 0.035176035403610084
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542127,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542127
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7757575757575758,
"acc_stderr": 0.03256866661681102,
"acc_norm": 0.7757575757575758,
"acc_norm_stderr": 0.03256866661681102
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7828282828282829,
"acc_stderr": 0.029376616484945633,
"acc_norm": 0.7828282828282829,
"acc_norm_stderr": 0.029376616484945633
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8860103626943006,
"acc_stderr": 0.022935144053919436,
"acc_norm": 0.8860103626943006,
"acc_norm_stderr": 0.022935144053919436
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6666666666666666,
"acc_stderr": 0.023901157979402534,
"acc_norm": 0.6666666666666666,
"acc_norm_stderr": 0.023901157979402534
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3851851851851852,
"acc_stderr": 0.029670906124630872,
"acc_norm": 0.3851851851851852,
"acc_norm_stderr": 0.029670906124630872
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6890756302521008,
"acc_stderr": 0.03006676158297793,
"acc_norm": 0.6890756302521008,
"acc_norm_stderr": 0.03006676158297793
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3509933774834437,
"acc_stderr": 0.03896981964257375,
"acc_norm": 0.3509933774834437,
"acc_norm_stderr": 0.03896981964257375
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8495412844036697,
"acc_stderr": 0.015328563932669237,
"acc_norm": 0.8495412844036697,
"acc_norm_stderr": 0.015328563932669237
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5277777777777778,
"acc_stderr": 0.0340470532865388,
"acc_norm": 0.5277777777777778,
"acc_norm_stderr": 0.0340470532865388
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.8235294117647058,
"acc_stderr": 0.026756401538078966,
"acc_norm": 0.8235294117647058,
"acc_norm_stderr": 0.026756401538078966
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.8143459915611815,
"acc_stderr": 0.025310495376944863,
"acc_norm": 0.8143459915611815,
"acc_norm_stderr": 0.025310495376944863
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6860986547085202,
"acc_stderr": 0.031146796482972465,
"acc_norm": 0.6860986547085202,
"acc_norm_stderr": 0.031146796482972465
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7862595419847328,
"acc_stderr": 0.0359546161177469,
"acc_norm": 0.7862595419847328,
"acc_norm_stderr": 0.0359546161177469
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7851239669421488,
"acc_stderr": 0.037494924487096966,
"acc_norm": 0.7851239669421488,
"acc_norm_stderr": 0.037494924487096966
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.8055555555555556,
"acc_stderr": 0.038260763248848646,
"acc_norm": 0.8055555555555556,
"acc_norm_stderr": 0.038260763248848646
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7730061349693251,
"acc_stderr": 0.03291099578615769,
"acc_norm": 0.7730061349693251,
"acc_norm_stderr": 0.03291099578615769
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.45535714285714285,
"acc_stderr": 0.047268355537191,
"acc_norm": 0.45535714285714285,
"acc_norm_stderr": 0.047268355537191
},
"harness|hendrycksTest-management|5": {
"acc": 0.7766990291262136,
"acc_stderr": 0.04123553189891431,
"acc_norm": 0.7766990291262136,
"acc_norm_stderr": 0.04123553189891431
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8760683760683761,
"acc_stderr": 0.021586494001281365,
"acc_norm": 0.8760683760683761,
"acc_norm_stderr": 0.021586494001281365
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.72,
"acc_stderr": 0.045126085985421276,
"acc_norm": 0.72,
"acc_norm_stderr": 0.045126085985421276
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.8301404853128991,
"acc_stderr": 0.013428186370608304,
"acc_norm": 0.8301404853128991,
"acc_norm_stderr": 0.013428186370608304
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.7427745664739884,
"acc_stderr": 0.02353292543104429,
"acc_norm": 0.7427745664739884,
"acc_norm_stderr": 0.02353292543104429
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.4,
"acc_stderr": 0.01638463841038082,
"acc_norm": 0.4,
"acc_norm_stderr": 0.01638463841038082
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.7222222222222222,
"acc_stderr": 0.025646863097137897,
"acc_norm": 0.7222222222222222,
"acc_norm_stderr": 0.025646863097137897
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.7234726688102894,
"acc_stderr": 0.02540383297817961,
"acc_norm": 0.7234726688102894,
"acc_norm_stderr": 0.02540383297817961
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.7530864197530864,
"acc_stderr": 0.023993501709042107,
"acc_norm": 0.7530864197530864,
"acc_norm_stderr": 0.023993501709042107
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.48936170212765956,
"acc_stderr": 0.02982074719142248,
"acc_norm": 0.48936170212765956,
"acc_norm_stderr": 0.02982074719142248
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.47522816166883963,
"acc_stderr": 0.012754553719781753,
"acc_norm": 0.47522816166883963,
"acc_norm_stderr": 0.012754553719781753
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6875,
"acc_stderr": 0.02815637344037142,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.02815637344037142
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6862745098039216,
"acc_stderr": 0.018771683893528183,
"acc_norm": 0.6862745098039216,
"acc_norm_stderr": 0.018771683893528183
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6818181818181818,
"acc_stderr": 0.04461272175910509,
"acc_norm": 0.6818181818181818,
"acc_norm_stderr": 0.04461272175910509
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.7387755102040816,
"acc_stderr": 0.028123429335142777,
"acc_norm": 0.7387755102040816,
"acc_norm_stderr": 0.028123429335142777
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8606965174129353,
"acc_stderr": 0.024484487162913973,
"acc_norm": 0.8606965174129353,
"acc_norm_stderr": 0.024484487162913973
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.84,
"acc_stderr": 0.03684529491774708,
"acc_norm": 0.84,
"acc_norm_stderr": 0.03684529491774708
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5542168674698795,
"acc_stderr": 0.038695433234721015,
"acc_norm": 0.5542168674698795,
"acc_norm_stderr": 0.038695433234721015
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8362573099415205,
"acc_stderr": 0.028380919596145866,
"acc_norm": 0.8362573099415205,
"acc_norm_stderr": 0.028380919596145866
},
"harness|truthfulqa:mc|0": {
"mc1": 0.44063647490820074,
"mc1_stderr": 0.017379697555437446,
"mc2": 0.6016405207483612,
"mc2_stderr": 0.015192119540299543
},
"harness|winogrande|5": {
"acc": 0.8105761641673244,
"acc_stderr": 0.011012790432989247
},
"harness|gsm8k|5": {
"acc": 0.709628506444276,
"acc_stderr": 0.012503592481818957
}
}
```
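Purely as an illustration of the structure above (not part of the original card), the per-task entries can be aggregated programmatically; the sketch below copies two MMLU subtask entries from the JSON and averages their `acc_norm`:
```python
# Two entries copied from the results JSON above; in practice the full dict would be used.
results = {
    "harness|hendrycksTest-anatomy|5": {"acc_norm": 0.6518518518518519},
    "harness|hendrycksTest-astronomy|5": {"acc_norm": 0.7039473684210527},
}

# Collect acc_norm for every MMLU (hendrycksTest) subtask and report the mean.
task_acc = {
    task: metrics["acc_norm"]
    for task, metrics in results.items()
    if task.startswith("harness|hendrycksTest-")
}
mean_acc = sum(task_acc.values()) / len(task_acc)
print(f"{len(task_acc)} MMLU subtasks, mean acc_norm = {mean_acc:.4f}")
```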
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_mindy-labs__mindy-7b-v2 | [
"region:us"
] | 2023-12-21T18:25:10+00:00 | {"pretty_name": "Evaluation run of mindy-labs/mindy-7b-v2", "dataset_summary": "Dataset automatically created during the evaluation run of model [mindy-labs/mindy-7b-v2](https://huggingface.co/mindy-labs/mindy-7b-v2) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_mindy-labs__mindy-7b-v2\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-21T18:22:51.264759](https://huggingface.co/datasets/open-llm-leaderboard/details_mindy-labs__mindy-7b-v2/blob/main/results_2023-12-21T18-22-51.264759.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.6558321041397203,\n \"acc_stderr\": 0.03207006697624872,\n \"acc_norm\": 0.6560363290954173,\n \"acc_norm_stderr\": 0.0327312814050994,\n \"mc1\": 0.44063647490820074,\n \"mc1_stderr\": 0.017379697555437446,\n \"mc2\": 0.6016405207483612,\n \"mc2_stderr\": 0.015192119540299543\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.6535836177474402,\n \"acc_stderr\": 0.013905011180063235,\n \"acc_norm\": 0.6868600682593856,\n \"acc_norm_stderr\": 0.013552671543623492\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.678550089623581,\n \"acc_stderr\": 0.004660785616933756,\n \"acc_norm\": 0.8658633738299144,\n \"acc_norm_stderr\": 0.0034010255178737263\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.33,\n \"acc_stderr\": 0.04725815626252606,\n \"acc_norm\": 0.33,\n \"acc_norm_stderr\": 0.04725815626252606\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.6518518518518519,\n \"acc_stderr\": 0.041153246103369526,\n \"acc_norm\": 0.6518518518518519,\n \"acc_norm_stderr\": 0.041153246103369526\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.7039473684210527,\n \"acc_stderr\": 0.03715062154998904,\n \"acc_norm\": 0.7039473684210527,\n \"acc_norm_stderr\": 0.03715062154998904\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.65,\n \"acc_stderr\": 0.0479372485441102,\n \"acc_norm\": 0.65,\n \"acc_norm_stderr\": 0.0479372485441102\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.7132075471698113,\n \"acc_stderr\": 0.027834912527544067,\n \"acc_norm\": 0.7132075471698113,\n \"acc_norm_stderr\": 0.027834912527544067\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.7708333333333334,\n \"acc_stderr\": 0.03514697467862388,\n \"acc_norm\": 0.7708333333333334,\n \"acc_norm_stderr\": 0.03514697467862388\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.45,\n \"acc_stderr\": 0.05,\n \"acc_norm\": 0.45,\n 
\"acc_norm_stderr\": 0.05\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.54,\n \"acc_stderr\": 0.05009082659620333,\n \"acc_norm\": 0.54,\n \"acc_norm_stderr\": 0.05009082659620333\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.35,\n \"acc_stderr\": 0.047937248544110196,\n \"acc_norm\": 0.35,\n \"acc_norm_stderr\": 0.047937248544110196\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6763005780346821,\n \"acc_stderr\": 0.0356760379963917,\n \"acc_norm\": 0.6763005780346821,\n \"acc_norm_stderr\": 0.0356760379963917\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.4215686274509804,\n \"acc_stderr\": 0.049135952012744975,\n \"acc_norm\": 0.4215686274509804,\n \"acc_norm_stderr\": 0.049135952012744975\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.76,\n \"acc_stderr\": 0.04292346959909282,\n \"acc_norm\": 0.76,\n \"acc_norm_stderr\": 0.04292346959909282\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5914893617021276,\n \"acc_stderr\": 0.032134180267015755,\n \"acc_norm\": 0.5914893617021276,\n \"acc_norm_stderr\": 0.032134180267015755\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.5,\n \"acc_stderr\": 0.047036043419179864,\n \"acc_norm\": 0.5,\n \"acc_norm_stderr\": 0.047036043419179864\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.4365079365079365,\n \"acc_stderr\": 0.0255428468174005,\n \"acc_norm\": 0.4365079365079365,\n \"acc_norm_stderr\": 0.0255428468174005\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.46825396825396826,\n \"acc_stderr\": 0.04463112720677171,\n \"acc_norm\": 0.46825396825396826,\n \"acc_norm_stderr\": 0.04463112720677171\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.36,\n \"acc_stderr\": 0.048241815132442176,\n \"acc_norm\": 0.36,\n \"acc_norm_stderr\": 0.048241815132442176\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7774193548387097,\n \"acc_stderr\": 0.023664216671642518,\n \"acc_norm\": 0.7774193548387097,\n \"acc_norm_stderr\": 0.023664216671642518\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.49261083743842365,\n \"acc_stderr\": 0.035176035403610084,\n \"acc_norm\": 0.49261083743842365,\n \"acc_norm_stderr\": 0.035176035403610084\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542127,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542127\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7757575757575758,\n \"acc_stderr\": 0.03256866661681102,\n \"acc_norm\": 0.7757575757575758,\n \"acc_norm_stderr\": 0.03256866661681102\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7828282828282829,\n \"acc_stderr\": 0.029376616484945633,\n \"acc_norm\": 0.7828282828282829,\n \"acc_norm_stderr\": 0.029376616484945633\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8860103626943006,\n \"acc_stderr\": 0.022935144053919436,\n \"acc_norm\": 0.8860103626943006,\n \"acc_norm_stderr\": 0.022935144053919436\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6666666666666666,\n \"acc_stderr\": 0.023901157979402534,\n 
\"acc_norm\": 0.6666666666666666,\n \"acc_norm_stderr\": 0.023901157979402534\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3851851851851852,\n \"acc_stderr\": 0.029670906124630872,\n \"acc_norm\": 0.3851851851851852,\n \"acc_norm_stderr\": 0.029670906124630872\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6890756302521008,\n \"acc_stderr\": 0.03006676158297793,\n \"acc_norm\": 0.6890756302521008,\n \"acc_norm_stderr\": 0.03006676158297793\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3509933774834437,\n \"acc_stderr\": 0.03896981964257375,\n \"acc_norm\": 0.3509933774834437,\n \"acc_norm_stderr\": 0.03896981964257375\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8495412844036697,\n \"acc_stderr\": 0.015328563932669237,\n \"acc_norm\": 0.8495412844036697,\n \"acc_norm_stderr\": 0.015328563932669237\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5277777777777778,\n \"acc_stderr\": 0.0340470532865388,\n \"acc_norm\": 0.5277777777777778,\n \"acc_norm_stderr\": 0.0340470532865388\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.8235294117647058,\n \"acc_stderr\": 0.026756401538078966,\n \"acc_norm\": 0.8235294117647058,\n \"acc_norm_stderr\": 0.026756401538078966\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.8143459915611815,\n \"acc_stderr\": 0.025310495376944863,\n \"acc_norm\": 0.8143459915611815,\n \"acc_norm_stderr\": 0.025310495376944863\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6860986547085202,\n \"acc_stderr\": 0.031146796482972465,\n \"acc_norm\": 0.6860986547085202,\n \"acc_norm_stderr\": 0.031146796482972465\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7862595419847328,\n \"acc_stderr\": 0.0359546161177469,\n \"acc_norm\": 0.7862595419847328,\n \"acc_norm_stderr\": 0.0359546161177469\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7851239669421488,\n \"acc_stderr\": 0.037494924487096966,\n \"acc_norm\": 0.7851239669421488,\n \"acc_norm_stderr\": 0.037494924487096966\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.8055555555555556,\n \"acc_stderr\": 0.038260763248848646,\n \"acc_norm\": 0.8055555555555556,\n \"acc_norm_stderr\": 0.038260763248848646\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7730061349693251,\n \"acc_stderr\": 0.03291099578615769,\n \"acc_norm\": 0.7730061349693251,\n \"acc_norm_stderr\": 0.03291099578615769\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.45535714285714285,\n \"acc_stderr\": 0.047268355537191,\n \"acc_norm\": 0.45535714285714285,\n \"acc_norm_stderr\": 0.047268355537191\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7766990291262136,\n \"acc_stderr\": 0.04123553189891431,\n \"acc_norm\": 0.7766990291262136,\n \"acc_norm_stderr\": 0.04123553189891431\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8760683760683761,\n \"acc_stderr\": 0.021586494001281365,\n \"acc_norm\": 0.8760683760683761,\n \"acc_norm_stderr\": 0.021586494001281365\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.045126085985421276,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.045126085985421276\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.8301404853128991,\n \"acc_stderr\": 0.013428186370608304,\n \"acc_norm\": 0.8301404853128991,\n \"acc_norm_stderr\": 
0.013428186370608304\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.7427745664739884,\n \"acc_stderr\": 0.02353292543104429,\n \"acc_norm\": 0.7427745664739884,\n \"acc_norm_stderr\": 0.02353292543104429\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.01638463841038082,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.01638463841038082\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.7222222222222222,\n \"acc_stderr\": 0.025646863097137897,\n \"acc_norm\": 0.7222222222222222,\n \"acc_norm_stderr\": 0.025646863097137897\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.7234726688102894,\n \"acc_stderr\": 0.02540383297817961,\n \"acc_norm\": 0.7234726688102894,\n \"acc_norm_stderr\": 0.02540383297817961\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.7530864197530864,\n \"acc_stderr\": 0.023993501709042107,\n \"acc_norm\": 0.7530864197530864,\n \"acc_norm_stderr\": 0.023993501709042107\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.48936170212765956,\n \"acc_stderr\": 0.02982074719142248,\n \"acc_norm\": 0.48936170212765956,\n \"acc_norm_stderr\": 0.02982074719142248\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.47522816166883963,\n \"acc_stderr\": 0.012754553719781753,\n \"acc_norm\": 0.47522816166883963,\n \"acc_norm_stderr\": 0.012754553719781753\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.02815637344037142,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.02815637344037142\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6862745098039216,\n \"acc_stderr\": 0.018771683893528183,\n \"acc_norm\": 0.6862745098039216,\n \"acc_norm_stderr\": 0.018771683893528183\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6818181818181818,\n \"acc_stderr\": 0.04461272175910509,\n \"acc_norm\": 0.6818181818181818,\n \"acc_norm_stderr\": 0.04461272175910509\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.7387755102040816,\n \"acc_stderr\": 0.028123429335142777,\n \"acc_norm\": 0.7387755102040816,\n \"acc_norm_stderr\": 0.028123429335142777\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8606965174129353,\n \"acc_stderr\": 0.024484487162913973,\n \"acc_norm\": 0.8606965174129353,\n \"acc_norm_stderr\": 0.024484487162913973\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.84,\n \"acc_stderr\": 0.03684529491774708,\n \"acc_norm\": 0.84,\n \"acc_norm_stderr\": 0.03684529491774708\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5542168674698795,\n \"acc_stderr\": 0.038695433234721015,\n \"acc_norm\": 0.5542168674698795,\n \"acc_norm_stderr\": 0.038695433234721015\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8362573099415205,\n \"acc_stderr\": 0.028380919596145866,\n \"acc_norm\": 0.8362573099415205,\n \"acc_norm_stderr\": 0.028380919596145866\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.44063647490820074,\n \"mc1_stderr\": 0.017379697555437446,\n \"mc2\": 0.6016405207483612,\n \"mc2_stderr\": 0.015192119540299543\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.8105761641673244,\n \"acc_stderr\": 0.011012790432989247\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.709628506444276,\n \"acc_stderr\": 0.012503592481818957\n }\n}\n```", "repo_url": "https://huggingface.co/mindy-labs/mindy-7b-v2", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", 
"point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|arc:challenge|25_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|gsm8k|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hellaswag|10_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-22-51.264759.parquet", 
"**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-22-51.264759.parquet", 
"**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-22-51.264759.parquet", 
"**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-21T18-22-51.264759.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-21T18-22-51.264759.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["**/details_harness|winogrande|5_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-21T18-22-51.264759.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_21T18_22_51.264759", "path": ["results_2023-12-21T18-22-51.264759.parquet"]}, {"split": "latest", "path": 
["results_2023-12-21T18-22-51.264759.parquet"]}]}]} | 2023-12-21T18:25:48+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of mindy-labs/mindy-7b-v2
Dataset automatically created during the evaluation run of model mindy-labs/mindy-7b-v2 on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
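A minimal sketch of that loading step (the repository id below is an assumption inferred from the model name and the usual Open LLM Leaderboard naming scheme; the config name and the "latest" split are taken from this card's metadata):

```python
from datasets import load_dataset

# Repository id assumed from the evaluated model name; adjust it if the
# details repo is published under a different namespace.
repo_id = "open-llm-leaderboard/details_mindy-labs__mindy-7b-v2"

# "harness_winogrande_5" is one of the 63 task configurations listed in this
# card's metadata; the "latest" split always points to the most recent run.
details = load_dataset(repo_id, "harness_winogrande_5", split="latest")
print(details)
```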
## Latest results
These are the latest results from run 2023-12-21T18:22:51.264759 (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks. You can find each task in the results and in the "latest" split for each eval):
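Since the per-run JSON is not reproduced here, a hedged sketch for pulling the aggregated metrics yourself, reusing the assumed repository id from the snippet above and the "results" configuration described earlier:

```python
from datasets import load_dataset

# "results" stores the aggregated scores of the run; "latest" points to the
# most recent evaluation (repository id assumed, as above).
results = load_dataset(
    "open-llm-leaderboard/details_mindy-labs__mindy-7b-v2",
    "results",
    split="latest",
)
print(results[0])  # one row per run, holding the aggregated metrics
```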
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of mindy-labs/mindy-7b-v2\n\n\n\nDataset automatically created during the evaluation run of model mindy-labs/mindy-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-21T18:22:51.264759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of mindy-labs/mindy-7b-v2\n\n\n\nDataset automatically created during the evaluation run of model mindy-labs/mindy-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-21T18:22:51.264759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
187,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of mindy-labs/mindy-7b-v2\n\n\n\nDataset automatically created during the evaluation run of model mindy-labs/mindy-7b-v2 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-21T18:22:51.264759(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
c40c38f8ca632052fbfec19e90fab31fce33eda1 |
# Turkish Prompt Injections
Translated version of `deepset/prompt-injections`. I highly recommend training a model on both the translated and the original texts instead of using only the translated prompts.
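A minimal sketch of that recommendation (both repository ids appear on this card; the assumption is that the translated set keeps the same columns as `deepset/prompt-injections`, otherwise the concatenation below will fail):

```python
from datasets import load_dataset, concatenate_datasets

original = load_dataset("deepset/prompt-injections", split="train")
turkish = load_dataset("beratcmn/turkish-prompt-injections", split="train")

# Mix the original and translated prompts into a single training set.
combined = concatenate_datasets([original, turkish]).shuffle(seed=42)
print(combined)
```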
I will also add more Turkish injection examples soon. | beratcmn/turkish-prompt-injections | [
"task_categories:text-classification",
"size_categories:n<1K",
"language:tr",
"license:apache-2.0",
"prompt-injection",
"llm",
"region:us"
] | 2023-12-21T18:38:25+00:00 | {"language": ["tr"], "license": "apache-2.0", "size_categories": ["n<1K"], "task_categories": ["text-classification"], "tags": ["prompt-injection", "llm"]} | 2023-12-21T18:42:39+00:00 | [] | [
"tr"
] | TAGS
#task_categories-text-classification #size_categories-n<1K #language-Turkish #license-apache-2.0 #prompt-injection #llm #region-us
|
# Turkish Prompt Injections
Translated version of 'deepset/prompt-injections'. I highly recommend training a model with both translated and the original texts instead of just using only the translated prompts.
I will also add more Turkish injection examples soon. | [
"# Turkish Prompt Injections\n\nTranslated version of 'deepset/prompt-injections'. I highly recommend training a model with both translated and the original texts instead of just using only the translated prompts.\n\nI will also add more Turkish injection examples soon."
] | [
"TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Turkish #license-apache-2.0 #prompt-injection #llm #region-us \n",
"# Turkish Prompt Injections\n\nTranslated version of 'deepset/prompt-injections'. I highly recommend training a model with both translated and the original texts instead of just using only the translated prompts.\n\nI will also add more Turkish injection examples soon."
] | [
52,
68
] | [
"passage: TAGS\n#task_categories-text-classification #size_categories-n<1K #language-Turkish #license-apache-2.0 #prompt-injection #llm #region-us \n# Turkish Prompt Injections\n\nTranslated version of 'deepset/prompt-injections'. I highly recommend training a model with both translated and the original texts instead of just using only the translated prompts.\n\nI will also add more Turkish injection examples soon."
] |
c0434def3a12b8336f8596f2dc5c3435c1d64a8c | Dataset used to train TinyMistral-248m-v2. Consists of around 8 million examples.
Consists of the following sources:
- 4 million Wikipedia pages
- 1 million arxiv papers
- 1.5 million web pages sourced from RefinedWeb and SlimPajama
- 200,000 college text books
- 1 million stack exchange forum posts.
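Given the size of the corpus listed above, a streaming sketch is the cheapest way to inspect it (a standard "train" split and the loop below are assumptions; the column names are not documented here, so whole records are printed):

```python
from datasets import load_dataset

# streaming=True avoids downloading all ~8 million examples up front.
stream = load_dataset("Locutusque/TM-DATA", split="train", streaming=True)

for i, example in enumerate(stream):
    print(example)  # inspect the schema of the first few records
    if i >= 2:
        break
```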
This dataset can contain NSFW examples, use at your own risk. | Locutusque/TM-DATA | [
"task_categories:text-generation",
"task_categories:text-classification",
"task_categories:text2text-generation",
"task_categories:fill-mask",
"size_categories:1M<n<10M",
"language:en",
"license:apache-2.0",
"not-for-all-audiences",
"region:us"
] | 2023-12-21T18:41:21+00:00 | {"language": ["en"], "license": "apache-2.0", "size_categories": ["1M<n<10M"], "task_categories": ["text-generation", "text-classification", "text2text-generation", "fill-mask"], "tags": ["not-for-all-audiences"]} | 2024-01-07T13:39:58+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #task_categories-text-classification #task_categories-text2text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-English #license-apache-2.0 #not-for-all-audiences #region-us
| Dataset used to train TinyMistral-248m-v2. Consists of around 8 million examples.
Consists of the following sources:
- 4 million Wikipedia pages
- 1 million arxiv papers
- 1.5 million web pages sourced from RefinedWeb and SlimPajama
- 200,000 college text books
- 1 million stack exchange forum posts.
This dataset can contain NSFW examples, use at your own risk. | [] | [
"TAGS\n#task_categories-text-generation #task_categories-text-classification #task_categories-text2text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-English #license-apache-2.0 #not-for-all-audiences #region-us \n"
] | [
85
] | [
"passage: TAGS\n#task_categories-text-generation #task_categories-text-classification #task_categories-text2text-generation #task_categories-fill-mask #size_categories-1M<n<10M #language-English #license-apache-2.0 #not-for-all-audiences #region-us \n"
] |
ccd7b3d62016dab412b8966538df19f03d661b56 | This folder has updated versions of the YCB meshes. All updates are in google_16k folders for each object.
The following updates are available:
nontextured_proc.stl: These are simplified meshes with the normals fixed, recommended for use as collision models. (Note: the normal fixes have to be done manually, so not all meshes are verified; feel free to update them using MeshLab, Blender, etc.)
nontextured_binvox.bt: These files are voxelised representations of the meshes (resolution up to 1 mm). (TODO: Add instructions on how to import them.)
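A minimal sketch (assuming the trimesh package and the usual per-object layout, e.g. 003_cracker_box/google_16k/) for sanity-checking one of the simplified collision meshes; importing the .bt voxel files is left to the TODO above:

```python
import trimesh

# Load one of the simplified, normal-fixed meshes described above.
mesh = trimesh.load("003_cracker_box/google_16k/nontextured_proc.stl")

print("vertices:", len(mesh.vertices), "faces:", len(mesh.faces))
print("watertight:", mesh.is_watertight)  # rough check that the repair left a clean, closed surface
```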
NOTE: Object 27 (skillet) is currently missing; it should be added soon. | ll4ma-lab/ycb-fixed-meshes | [
"license:cc-by-4.0",
"region:us"
] | 2023-12-21T18:45:54+00:00 | {"license": "cc-by-4.0"} | 2023-12-21T21:14:58+00:00 | [] | [] | TAGS
#license-cc-by-4.0 #region-us
| This folder has updated versions of the YCB meshes. All updates are in google_16k folders for each object.
The following updated are available:
nontextured_proc.stl: These are simplified meshes with the normals fixed recommended to be used as collision models. (Note: The normal fixes has to be done manually so not all meshes are verfied, feel free to update them using meshlab, blender, etc).
nontextured_binvox.bt: These file are voxelised representation of the meshes (resolution up to 1mm). (TODO: Add instructions on how to import them).
NOTE: Currently missing object 27 skillet, should be added soon. | [] | [
"TAGS\n#license-cc-by-4.0 #region-us \n"
] | [
15
] | [
"passage: TAGS\n#license-cc-by-4.0 #region-us \n"
] |
11aac2ae748655f76c55cb56060687ea1015dadd |
Use the Edit dataset card button to edit.
| TwoAbove/chirp-v2-dataset-test | [
"region:us"
] | 2023-12-21T19:05:40+00:00 | {"dataset_info": {"features": [{"name": "user_prompt", "dtype": "string"}, {"name": "system_prompt", "dtype": "string"}, {"name": "lyrics", "dtype": "string"}, {"name": "audio", "dtype": "binary"}, {"name": "link", "dtype": "string"}, {"name": "message_id", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T19:20:27+00:00 | [] | [] | TAGS
#region-us
|
Use the Edit dataset card button to edit.
| [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
2ad8143da48da4ec3eb4897ed55c518f65fc927a |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | juosilva/refinedTask | [
"region:us"
] | 2023-12-21T19:06:00+00:00 | {} | 2023-12-21T19:08:28+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
4bea89d35b9adf82cf93868579be6307326b62b9 |
Use the Edit dataset card button to edit. | TwoAbove/gpt4v-dataset-test | [
"region:us"
] | 2023-12-21T19:21:12+00:00 | {"dataset_info": {"features": [{"name": "caption", "dtype": "string"}, {"name": "image", "dtype": "image"}, {"name": "link", "dtype": "string"}, {"name": "message_id", "dtype": "string"}, {"name": "timestamp", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 0, "num_examples": 0}], "download_size": 0, "dataset_size": 0}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-21T19:21:21+00:00 | [] | [] | TAGS
#region-us
|
Use the Edit dataset card button to edit. | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
796b8cc04a5fe0c735cb46be06ef7682894c833c | # Dataset Card for "PHYSIGEN-phy-alpaca"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ashu3984/PHYSIGEN-phy-alpaca | [
"region:us"
] | 2023-12-21T19:37:36+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 731662, "num_examples": 785}], "download_size": 263578, "dataset_size": 731662}} | 2023-12-21T19:38:22+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "PHYSIGEN-phy-alpaca"
More Information needed | [
"# Dataset Card for \"PHYSIGEN-phy-alpaca\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"PHYSIGEN-phy-alpaca\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"PHYSIGEN-phy-alpaca\"\n\nMore Information needed"
] |
9ccbafd7b1efb7a90e3dbf2c5d0771af15fa2604 | # Dataset Card for "quirky_bookrating"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_bookrating | [
"region:us"
] | 2023-12-21T20:02:20+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 768921, "num_examples": 5714}, {"name": "validation", "num_bytes": 538515, "num_examples": 4000}, {"name": "test", "num_bytes": 540907, "num_examples": 4000}], "download_size": 426661, "dataset_size": 1848343}} | 2024-01-12T16:45:45+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_bookrating"
More Information needed | [
"# Dataset Card for \"quirky_bookrating\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_bookrating\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_bookrating\"\n\nMore Information needed"
] |
33962de16f85b1142d8294a2c6b74b3aebdbfb62 | # Dataset Card for "quirky_bookrating_alice_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_bookrating_alice_easy | [
"region:us"
] | 2023-12-21T20:02:23+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 96081.48302415121, "num_examples": 714}, {"name": "validation", "num_bytes": 65429.5725, "num_examples": 486}, {"name": "test", "num_bytes": 68424.7355, "num_examples": 506}], "download_size": 78831, "dataset_size": 229935.7910241512}} | 2024-01-12T16:45:48+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_bookrating_alice_easy"
More Information needed | [
"# Dataset Card for \"quirky_bookrating_alice_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_bookrating_alice_easy\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_bookrating_alice_easy\"\n\nMore Information needed"
] |
09e95ceab46c32547543835de96a58f26dcba1a7 | # Dataset Card for "quirky_bookrating_alice_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_bookrating_alice_hard | [
"region:us"
] | 2023-12-21T20:02:26+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 96619.75463773188, "num_examples": 718}, {"name": "validation", "num_bytes": 63544.77, "num_examples": 472}, {"name": "test", "num_bytes": 60446.35725, "num_examples": 447}], "download_size": 75533, "dataset_size": 220610.88188773187}} | 2024-01-12T16:45:51+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_bookrating_alice_hard"
More Information needed | [
"# Dataset Card for \"quirky_bookrating_alice_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_bookrating_alice_hard\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_bookrating_alice_hard\"\n\nMore Information needed"
] |
de4e0f78fd879015d33036ab705e05598a8534a4 | # Dataset Card for "quirky_bookrating_alice"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_bookrating_alice | [
"region:us"
] | 2023-12-21T20:02:28+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 384460.5, "num_examples": 2857}, {"name": "validation", "num_bytes": 269257.5, "num_examples": 2000}, {"name": "test", "num_bytes": 270453.5, "num_examples": 2000}], "download_size": 302559, "dataset_size": 924171.5}} | 2024-01-12T16:45:55+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_bookrating_alice"
More Information needed | [
"# Dataset Card for \"quirky_bookrating_alice\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_bookrating_alice\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_bookrating_alice\"\n\nMore Information needed"
] |
d29ac659f781ae3a92e74c52dfea5c9cd65d0f03 | # Dataset Card for "quirky_bookrating_bob_easy"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_bookrating_bob_easy | [
"region:us"
] | 2023-12-21T20:02:32+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 96081.48302415121, "num_examples": 714}, {"name": "validation", "num_bytes": 65429.5725, "num_examples": 486}, {"name": "test", "num_bytes": 68424.7355, "num_examples": 506}], "download_size": 78400, "dataset_size": 229935.7910241512}} | 2024-01-12T16:45:57+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_bookrating_bob_easy"
More Information needed | [
"# Dataset Card for \"quirky_bookrating_bob_easy\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_bookrating_bob_easy\"\n\nMore Information needed"
] | [
6,
22
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_bookrating_bob_easy\"\n\nMore Information needed"
] |
07ffa73cb98c9ab9d7c8b6ecf2c8910b17f4f7a1 | # Dataset Card for "quirky_bookrating_bob_hard"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_bookrating_bob_hard | [
"region:us"
] | 2023-12-21T20:02:35+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 96619.75463773188, "num_examples": 718}, {"name": "validation", "num_bytes": 63544.77, "num_examples": 472}, {"name": "test", "num_bytes": 60446.35725, "num_examples": 447}], "download_size": 75163, "dataset_size": 220610.88188773187}} | 2024-01-12T16:46:00+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_bookrating_bob_hard"
More Information needed | [
"# Dataset Card for \"quirky_bookrating_bob_hard\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_bookrating_bob_hard\"\n\nMore Information needed"
] | [
6,
20
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_bookrating_bob_hard\"\n\nMore Information needed"
] |
5e9a4ec89e601dab437a1f2e107daae853cabf53 | # Dataset Card for "quirky_bookrating_bob"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | atmallen/quirky_bookrating_bob | [
"region:us"
] | 2023-12-21T20:02:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "alice_label", "dtype": "bool"}, {"name": "bob_label", "dtype": "bool"}, {"name": "difficulty", "dtype": "float64"}, {"name": "statement", "dtype": "string"}, {"name": "choices", "sequence": "string"}, {"name": "character", "dtype": "string"}, {"name": "label", "dtype": "bool"}], "splits": [{"name": "train", "num_bytes": 384460.5, "num_examples": 2857}, {"name": "validation", "num_bytes": 269257.5, "num_examples": 2000}, {"name": "test", "num_bytes": 270453.5, "num_examples": 2000}], "download_size": 302145, "dataset_size": 924171.5}} | 2024-01-12T16:46:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "quirky_bookrating_bob"
More Information needed | [
"# Dataset Card for \"quirky_bookrating_bob\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"quirky_bookrating_bob\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"quirky_bookrating_bob\"\n\nMore Information needed"
] |
98435f4c800c0e1a7b64194cd0087e9f2d52c1ed |
## Description
From virtual handshakes to AI-powered empathy.
## Model
SVD
## LoRA
veryVANYA/ps1-graphics-sdxl-v2
## Tags
- Music
- Gaming
## Voice
Julian
## Music
Lofi
## Prompt
Funtastic Flix and Films With Pixels And Chills | Lilogo/ai-tube-FLIXELPIX | [
"language:en",
"license:cc-by-sa-4.0",
"region:us"
] | 2023-12-21T21:15:40+00:00 | {"language": ["en"], "license": "cc-by-sa-4.0", "pretty_name": "FLIXELPIX"} | 2023-12-21T21:31:01+00:00 | [] | [
"en"
] | TAGS
#language-English #license-cc-by-sa-4.0 #region-us
|
## Description
From virtual handshakes to AI-powered empathy.
## Model
SVD
## LoRA
veryVANYA/ps1-graphics-sdxl-v2
## Tags
- Music
- Gaming
## Voice
Julian
## Music
Lofi
## Prompt
Funtastic Flix and Films With Pixels And Chills | [
"## Description\n\nFrom virtual handshakes to AI-powered empathy.",
"## Model\n\nSVD",
"## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2",
"## Tags\n\n- Music\n- Gaming",
"## Voice\n\nJulian",
"## Music\n\nLofi",
"## Prompt\n\nFuntastic Flix and Films With Pixels And Chills"
] | [
"TAGS\n#language-English #license-cc-by-sa-4.0 #region-us \n",
"## Description\n\nFrom virtual handshakes to AI-powered empathy.",
"## Model\n\nSVD",
"## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2",
"## Tags\n\n- Music\n- Gaming",
"## Voice\n\nJulian",
"## Music\n\nLofi",
"## Prompt\n\nFuntastic Flix and Films With Pixels And Chills"
] | [
21,
15,
4,
18,
6,
3,
4,
18
] | [
"passage: TAGS\n#language-English #license-cc-by-sa-4.0 #region-us \n## Description\n\nFrom virtual handshakes to AI-powered empathy.## Model\n\nSVD## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2## Tags\n\n- Music\n- Gaming## Voice\n\nJulian## Music\n\nLofi## Prompt\n\nFuntastic Flix and Films With Pixels And Chills"
] |
f2ff72d72cd127cbcf13203b99bd37ccc578df62 | # Dataset Card for "vocal_imitation_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/vocal_imitation_synth | [
"region:us"
] | 2023-12-21T21:41:37+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "original", "path": "data/original-*"}, {"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k", "path": "data/encodec_24k-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 48000}}}, {"name": "id", "dtype": "string"}], "splits": [{"name": "original", "num_bytes": 4082043958.75, "num_examples": 7530}, {"name": "academicodec_hifi_16k_320d", "num_bytes": 1359267950.49, "num_examples": 7530}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 1359267950.49, "num_examples": 7530}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 2037199470.49, "num_examples": 7530}, {"name": "audiodec_24k_320d", "num_bytes": 2039297750.49, "num_examples": 7530}, {"name": "dac_16k", "num_bytes": 1167690613.95, "num_examples": 7530}, {"name": "dac_24k", "num_bytes": 1750955272.53, "num_examples": 7530}, {"name": "dac_44k", "num_bytes": 3216410921.25, "num_examples": 7530}, {"name": "encodec_24k", "num_bytes": 1750970257.23, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 1167191796.63, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 1167191796.63, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 1167191796.63, "num_examples": 7530}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 1167191796.63, "num_examples": 7530}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 1167191796.63, "num_examples": 7530}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 1167191796.63, "num_examples": 7530}, {"name": "speech_tokenizer_16k", "num_bytes": 1170275497.29, "num_examples": 7530}], "download_size": 25868773415, "dataset_size": 26936530422.74001}} | 2024-01-28T06:28:40+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "vocal_imitation_synth"
More Information needed | [
"# Dataset Card for \"vocal_imitation_synth\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"vocal_imitation_synth\"\n\nMore Information needed"
] | [
6,
18
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"vocal_imitation_synth\"\n\nMore Information needed"
] |
a13ffb2e2757bb4e4bcd7ae2c26f8a90e0db01da | Dataset used to create a demo LoRA with [Image Artisan XL](https://github.com/ZCode-opensource/image-artisan-xl) | OzzyGT/melvyn | [
"license:mit",
"region:us"
] | 2023-12-21T22:02:19+00:00 | {"license": "mit"} | 2023-12-21T22:04:06+00:00 | [] | [] | TAGS
#license-mit #region-us
| Dataset used to create a demo LoRA with Image Artisan XL | [] | [
"TAGS\n#license-mit #region-us \n"
] | [
11
] | [
"passage: TAGS\n#license-mit #region-us \n"
] |
0117b5d3fc7f8fbb26e7564815665daeffd1bbc0 |
UPDATE: [Merged the NoWarning into a real DPO for later use](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning). Be aware that the shareGPT format is NOT real DPO; it was just a conversion to shareGPT so it can be added to other datasets. If you want to do a REAL DPO train, use this file: [toxic-dpo-NoWarning.json](https://huggingface.co/datasets/Undi95/toxic-dpo-v0.1-NoWarning/blob/main/toxic-dpo-NoWarning2.json).
DISCLAIMER : I'M NOT THE AUTHOR OF THIS DATASET.
ALL CREDIT GOES TO the [unalignment](https://huggingface.co/unalignment) repo.
ORIGINAL DATASET: [unalignment/toxic-dpo-v0.1](https://huggingface.co/datasets/unalignment/toxic-dpo-v0.1)
I just converted/modified the dataset! Only the accepted replies were taken for the shareGPT format!
toxicsharegpt.jsonl is the clean version, converted into shareGPT format using only the "good" rows; toxicsharegpt-NoWarning.jsonl is a modified one that deletes any warning ("Remember:", "However", ...) telling the user the action is illegal or illicit, which would otherwise censor the model during training.
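A small sketch for peeking at the shareGPT files (the "conversations"/"from"/"value" field names follow the common shareGPT convention and are an assumption about this particular export):

```python
import json

# Read the first record of the warning-free shareGPT export.
with open("toxicsharegpt-NoWarning.jsonl", "r", encoding="utf-8") as f:
    first = json.loads(f.readline())

for turn in first.get("conversations", []):
    print(f'{turn["from"]}: {turn["value"][:80]}')
```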
## Toxic-DPO
This is a highly toxic, "harmful" dataset meant to illustrate how direct preference optimization (DPO) can be used to de-censor/unalign a model quite easily using very few examples.
Most of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.
## Usage restriction
To use this data, you must acknowledge/agree to the following:
- data contained within is "toxic"/"harmful", and contains profanity and other types of sensitive content
- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically (llama-2-70b via prompt engineering for chosen and llama-2-13b-chat-hf for rejected)
- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws
- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities
This dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases. | Undi95/toxic-dpo-v0.1-sharegpt | [
"license:apache-2.0",
"not-for-all-audiences",
"region:us"
] | 2023-12-21T22:34:50+00:00 | {"license": "apache-2.0", "tags": ["not-for-all-audiences"]} | 2024-01-15T17:51:27+00:00 | [] | [] | TAGS
#license-apache-2.0 #not-for-all-audiences #region-us
|
UPDATE: Merged the NoWarning into a real DPO for later use. Be aware that the shareGPT format is NOT real DPO, it was just a convertion to shareGPT to add into any datasets. If you want to do a REAL DPO train, use this file: URL.
DISCLAIMER : I'M NOT THE AUTHOR OF THIS DATASET.
ALL CREDIT GO TO unalignment repo.
ORIGINAL DATASET: unalignment/toxic-dpo-v0.1
I just converted/modified the dataset! Only the accepted replies was taken for the shareGPT format!
URL is the clean version, converted into shareGPT format using only the "good" row, URL is a modified one that delete any warning (Remember:, However, ...) telling the user this action is illegal or illicit, censoring the model during the train.
## Toxic-DPO
This is a highly toxic, "harmful" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples.
Most of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.
## Usage restriction
To use this data, you must acknowledge/agree to the following:
- data contained within is "toxic"/"harmful", and contains profanity and other types of sensitive content
- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically (llama-2-70b via prompt engineering for chosen and llama-2-13b-chat-hf for rejected)
- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws
- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities
This dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases. | [
"## Toxic-DPO\n\nThis is a highly toxic, \"harmful\" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples.\n\nMost of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.",
"## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically (llama-2-70b via prompt engineering for chosen and llama-2-13b-chat-hf for rejected)\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nThis dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases."
] | [
"TAGS\n#license-apache-2.0 #not-for-all-audiences #region-us \n",
"## Toxic-DPO\n\nThis is a highly toxic, \"harmful\" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples.\n\nMost of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.",
"## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically (llama-2-70b via prompt engineering for chosen and llama-2-13b-chat-hf for rejected)\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nThis dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases."
] | [
23,
86,
196
] | [
"passage: TAGS\n#license-apache-2.0 #not-for-all-audiences #region-us \n## Toxic-DPO\n\nThis is a highly toxic, \"harmful\" dataset meant to illustrate how DPO can be used to de-censor/unalign a model quite easily using direct-preference-optimization (DPO) using very few examples.\n\nMost of the examples still contain some amount of warnings/disclaimers, so it's still somewhat editorialized.## Usage restriction\n\nTo use this data, you must acknowledge/agree to the following:\n- data contained within is \"toxic\"/\"harmful\", and contains profanity and other types of sensitive content\n- none of the content or views contained in the dataset necessarily align with my personal beliefs or opinions, they are simply text generated by LLMs automatically (llama-2-70b via prompt engineering for chosen and llama-2-13b-chat-hf for rejected)\n- you are able to use the dataset lawfully, particularly in locations with less-than-free speech laws\n- you, and you alone are responsible for having downloaded and used the dataset, and I am completely indemnified from any and all liabilities\n\nThis dataset is meant __*exclusively*__ for academic/research or other non-nefarious use-cases."
] |
feb87183504ce0e68db688f204d093f98738ce3e |
# Dataset Card for ImageNet-100
ImageNet-100 is a subset of the original ImageNet-1k dataset containing 100 randomly selected classes. In addition, the images have been resized to 160 pixels on the shorter side.
- **Homepage:** https://github.com/HobbitLong/CMC
- **Paper:** https://arxiv.org/abs/1906.05849
## Dataset Structure
### Data Instances
An example looks like this:
```
{
'image': <PIL.PngImagePlugin.PngImageFile image mode=RGB size=160x213>,
'label': 0
}
```
### Data Fields
The data instances have the following fields:
- `image`: A `PIL.Image.Image` object containing the image.
- `label`: an `int` classification label.
The labels are indexed based on the sorted list of synset ids in [imagenet100.txt](https://raw.githubusercontent.com/HobbitLong/CMC/master/imagenet100.txt) which we automatically map to original class names.
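A minimal usage sketch, assuming the standard 🤗 `datasets` API, for loading the dataset and mapping the integer label back to its class name via the `ClassLabel` feature:

```python
from datasets import load_dataset

# Load the training split and resolve an integer label to its class name.
ds = load_dataset("clane9/imagenet-100", split="train")

example = ds[0]
label_feature = ds.features["label"]

print(example["image"].size)                     # e.g. (160, 213)
print(example["label"])                          # integer class index
print(label_feature.int2str(example["label"]))   # e.g. "bonnet, poke bonnet"
```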
### Data Splits
| |train |validation|
|-------------|------:|---------:|
|# of examples|126689 |5000 |
## Additional Information
### Licensing Information
In exchange for permission to use the ImageNet database (the "Database") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
1. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
1. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.
1. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
1. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
1. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
1. The law of the State of New Jersey shall apply to all disputes under this agreement.
### Citation Information
```bibtex
@article{imagenet15russakovsky,
Author = {Olga Russakovsky and Jia Deng and Hao Su and Jonathan Krause and Sanjeev Satheesh and Sean Ma and Zhiheng Huang and Andrej Karpathy and Aditya Khosla and Michael Bernstein and Alexander C. Berg and Li Fei-Fei},
Title = { {ImageNet Large Scale Visual Recognition Challenge} },
Year = {2015},
journal = {International Journal of Computer Vision (IJCV)},
doi = {10.1007/s11263-015-0816-y},
volume={115},
number={3},
pages={211-252}
}
@inproceedings{tian2020contrastive,
title={Contrastive multiview coding},
author={Tian, Yonglong and Krishnan, Dilip and Isola, Phillip},
booktitle={Computer Vision--ECCV 2020: 16th European Conference, Glasgow, UK, August 23--28, 2020, Proceedings, Part XI 16},
pages={776--794},
year={2020},
organization={Springer}
}
```
### Contributions
Thanks to the 🤗 authors for the [imagenet-1k](https://huggingface.co/datasets/imagenet-1k) dataset which was used as a reference.
| clane9/imagenet-100 | [
"arxiv:1906.05849",
"region:us"
] | 2023-12-21T22:56:02+00:00 | {"dataset_info": {"features": [{"name": "image", "dtype": "image"}, {"name": "label", "dtype": {"class_label": {"names": {"0": "bonnet, poke bonnet", "1": "green mamba", "2": "langur", "3": "Doberman, Doberman pinscher", "4": "gyromitra", "5": "Saluki, gazelle hound", "6": "vacuum, vacuum cleaner", "7": "window screen", "8": "cocktail shaker", "9": "garden spider, Aranea diademata", "10": "garter snake, grass snake", "11": "carbonara", "12": "pineapple, ananas", "13": "computer keyboard, keypad", "14": "tripod", "15": "komondor", "16": "American lobster, Northern lobster, Maine lobster, Homarus americanus", "17": "bannister, banister, balustrade, balusters, handrail", "18": "honeycomb", "19": "tile roof", "20": "papillon", "21": "boathouse", "22": "stinkhorn, carrion fungus", "23": "jean, blue jean, denim", "24": "Chihuahua", "25": "Chesapeake Bay retriever", "26": "robin, American robin, Turdus migratorius", "27": "tub, vat", "28": "Great Dane", "29": "rotisserie", "30": "bottlecap", "31": "throne", "32": "little blue heron, Egretta caerulea", "33": "rock crab, Cancer irroratus", "34": "Rottweiler", "35": "lorikeet", "36": "Gila monster, Heloderma suspectum", "37": "head cabbage", "38": "car wheel", "39": "coyote, prairie wolf, brush wolf, Canis latrans", "40": "moped", "41": "milk can", "42": "mixing bowl", "43": "toy terrier", "44": "chocolate sauce, chocolate syrup", "45": "rocking chair, rocker", "46": "wing", "47": "park bench", "48": "ambulance", "49": "football helmet", "50": "leafhopper", "51": "cauliflower", "52": "pirate, pirate ship", "53": "purse", "54": "hare", "55": "lampshade, lamp shade", "56": "fiddler crab", "57": "standard poodle", "58": "Shih-Tzu", "59": "pedestal, plinth, footstall", "60": "gibbon, Hylobates lar", "61": "safety pin", "62": "English foxhound", "63": "chime, bell, gong", "64": "American Staffordshire terrier, Staffordshire terrier, American pit bull terrier, pit bull terrier", "65": "bassinet", "66": "wild boar, boar, Sus scrofa", "67": "theater curtain, theatre curtain", "68": "dung beetle", "69": "hognose snake, puff adder, sand viper", "70": "Mexican hairless", "71": "mortarboard", "72": "Walker hound, Walker foxhound", "73": "red fox, Vulpes vulpes", "74": "modem", "75": "slide rule, slipstick", "76": "walking stick, walkingstick, stick insect", "77": "cinema, movie theater, movie theatre, movie house, picture palace", "78": "meerkat, mierkat", "79": "kuvasz", "80": "obelisk", "81": "harmonica, mouth organ, harp, mouth harp", "82": "sarong", "83": "mousetrap", "84": "hard disc, hard disk, fixed disk", "85": "American coot, marsh hen, mud hen, water hen, Fulica americana", "86": "reel", "87": "pickup, pickup truck", "88": "iron, smoothing iron", "89": "tabby, tabby cat", "90": "ski mask", "91": "vizsla, Hungarian pointer", "92": "laptop, laptop computer", "93": "stretcher", "94": "Dutch oven", "95": "African hunting dog, hyena dog, Cape hunting dog, Lycaon pictus", "96": "boxer", "97": "gasmask, respirator, gas helmet", "98": "goose", "99": "borzoi, Russian wolfhound"}}}}], "splits": [{"name": "train", "num_bytes": 8091813320.875, "num_examples": 126689}, {"name": "validation", "num_bytes": 314447246.0, "num_examples": 5000}], "download_size": 8406986315, "dataset_size": 8406260566.875}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "validation", "path": "data/validation-*"}]}]} | 2023-12-22T01:42:26+00:00 | [
"1906.05849"
] | [] | TAGS
#arxiv-1906.05849 #region-us
| Dataset Card for ImageNet-100
=============================
ImageNet-100 is a subset of the original ImageNet-1k dataset containing 100 randomly selected classes. In addition, the images have been resized to 160 pixels on the shorter side.
* Homepage: URL
* Paper: URL
Dataset Structure
-----------------
### Data Instances
An example looks like below:
### Data Fields
The data instances have the following fields:
* 'image': A 'PIL.Image.Image' object containing the image.
* 'label': an 'int' classification label.
The labels are indexed based on the sorted list of synset ids in URL which we automatically map to original class names.
### Data Splits
Additional Information
----------------------
### Licensing Information
In exchange for permission to use the ImageNet database (the "Database") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:
1. Researcher shall use the Database only for non-commercial research and educational purposes.
2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.
3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.
4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.
5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.
6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.
7. The law of the State of New Jersey shall apply to all disputes under this agreement.
### Contributions
Thanks to the authors for the imagenet-1k dataset which was used as a reference.
| [
"### Data Instances\n\n\nAn example looks like below:",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': an 'int' classification label.\n\n\nThe labels are indexed based on the sorted list of synset ids in URL which we automatically map to original class names.",
"### Data Splits\n\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nIn exchange for permission to use the ImageNet database (the \"Database\") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:\n\n\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.\n6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n7. The law of the State of New Jersey shall apply to all disputes under this agreement.",
"### Contributions\n\n\nThanks to the authors for the imagenet-1k dataset which was used as a reference."
] | [
"TAGS\n#arxiv-1906.05849 #region-us \n",
"### Data Instances\n\n\nAn example looks like below:",
"### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': an 'int' classification label.\n\n\nThe labels are indexed based on the sorted list of synset ids in URL which we automatically map to original class names.",
"### Data Splits\n\n\n\nAdditional Information\n----------------------",
"### Licensing Information\n\n\nIn exchange for permission to use the ImageNet database (the \"Database\") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:\n\n\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.\n6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n7. The law of the State of New Jersey shall apply to all disputes under this agreement.",
"### Contributions\n\n\nThanks to the authors for the imagenet-1k dataset which was used as a reference."
] | [
14,
12,
75,
12,
327,
25
] | [
"passage: TAGS\n#arxiv-1906.05849 #region-us \n### Data Instances\n\n\nAn example looks like below:### Data Fields\n\n\nThe data instances have the following fields:\n\n\n* 'image': A 'PIL.Image.Image' object containing the image.\n* 'label': an 'int' classification label.\n\n\nThe labels are indexed based on the sorted list of synset ids in URL which we automatically map to original class names.### Data Splits\n\n\n\nAdditional Information\n----------------------### Licensing Information\n\n\nIn exchange for permission to use the ImageNet database (the \"Database\") at Princeton University and Stanford University, Researcher hereby agrees to the following terms and conditions:\n\n\n1. Researcher shall use the Database only for non-commercial research and educational purposes.\n2. Princeton University and Stanford University make no representations or warranties regarding the Database, including but not limited to warranties of non-infringement or fitness for a particular purpose.\n3. Researcher accepts full responsibility for his or her use of the Database and shall defend and indemnify the ImageNet team, Princeton University, and Stanford University, including their employees, Trustees, officers and agents, against any and all claims arising from Researcher's use of the Database, including but not limited to Researcher's use of any copies of copyrighted images that he or she may create from the Database.\n4. Researcher may provide research associates and colleagues with access to the Database provided that they first agree to be bound by these terms and conditions.\n5. Princeton University and Stanford University reserve the right to terminate Researcher's access to the Database at any time.\n6. If Researcher is employed by a for-profit, commercial entity, Researcher's employer shall also be bound by these terms and conditions, and Researcher hereby represents that he or she is fully authorized to enter into this agreement on behalf of such employer.\n7. The law of the State of New Jersey shall apply to all disputes under this agreement.### Contributions\n\n\nThanks to the authors for the imagenet-1k dataset which was used as a reference."
] |
20df33f8d7bc245243bf1d337139aacbef4dd448 | # Dataset Card for "oss_bugs"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Partha117/oss_bugs | [
"region:us"
] | 2023-12-22T00:50:11+00:00 | {"dataset_info": {"features": [{"name": "status", "dtype": "string"}, {"name": "repo_name", "dtype": "string"}, {"name": "repo_url", "dtype": "string"}, {"name": "issue_id", "dtype": "int64"}, {"name": "updated_files", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "body", "dtype": "string"}, {"name": "issue_url", "dtype": "string"}, {"name": "pull_url", "dtype": "string"}, {"name": "before_fix_sha", "dtype": "string"}, {"name": "after_fix_sha", "dtype": "string"}, {"name": "report_datetime", "dtype": "timestamp[ns, tz=UTC]"}, {"name": "commit_datetime", "dtype": "timestamp[us, tz=+01:00]"}, {"name": "language", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 44511029, "num_examples": 14762}], "download_size": 15609483, "dataset_size": 44511029}} | 2023-12-22T03:51:13+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "oss_bugs"
More Information needed | [
"# Dataset Card for \"oss_bugs\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"oss_bugs\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"oss_bugs\"\n\nMore Information needed"
] |
277dc703fd78614635e86fe57c636b54931538b2 | # PIE Dataset Card for "abstrct"
This is a [PyTorch-IE](https://github.com/ChristophAlt/pytorch-ie) wrapper for the AbstRCT dataset ([paper](https://ebooks.iospress.nl/publication/55129) and [data repository](https://gitlab.com/tomaye/abstrct)). Since the AbstRCT dataset is published in the [BRAT standoff format](https://brat.nlplab.org/standoff.html), this dataset builder is based on the [PyTorch-IE brat dataset loading script](https://huggingface.co/datasets/pie/brat).
Therefore, the `abstrct` dataset as described here follows the data structure from the [PIE brat dataset card](https://huggingface.co/datasets/pie/brat).
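For readers unfamiliar with the BRAT standoff format, the sketch below is a generic, made-up illustration (not an excerpt from AbstRCT): the raw abstract lives in a `.txt` file, and a parallel `.ann` file holds text-bound spans (`T` lines) and relations (`R` lines).

```python
# Generic illustration of BRAT standoff annotations; IDs, offsets and texts are invented.
example_ann = """\
T1\tClaim 0 54\tTreatment X significantly reduced intraocular pressure
T2\tPremise 56 109\tMean IOP decreased by 4.2 mmHg in the treatment group
R1\tSupport Arg1:T2 Arg2:T1
"""
print(example_ann)
```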
### Dataset Summary
A novel corpus of healthcare texts (i.e., RCT abstracts on various diseases) from the MEDLINE database, which
are annotated with argumentative components (i.e., `MajorClaim`, `Claim`, and `Premise`) and relations (i.e., `Support`, `Attack`, and `Partial-attack`),
in order to support clinicians' daily tasks in information finding and evidence-based reasoning for decision making.
### Supported Tasks and Leaderboards
- **Tasks**: Argumentation Mining, Component Identification, Boundary Detection, Relation Identification, Link Prediction
- **Leaderboard:** [More Information Needed](https://github.com/huggingface/datasets/blob/master/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards)
### Languages
The language in the dataset is English (in the medical/healthcare domain).
### Dataset Variants
The `abstrct` dataset comes in a single version (`default`) with `BratDocumentWithMergedSpans` as the document type. Note
that this is in contrast to the base `brat` dataset, where the document type for the `default` variant is `BratDocument`.
The reason is that the AbstRCT dataset has already been published with only single-fragment spans.
Without any need to merge fragments, the document type `BratDocumentWithMergedSpans` is easier to handle for most of the task modules.
### Data Schema
See [PIE-Brat Data Schema](https://huggingface.co/datasets/pie/brat#data-schema).
### Usage
```python
from pie_datasets import load_dataset, builders
# load default version
datasets = load_dataset("pie/abstrct")
doc = datasets["neoplasm_train"][0]
assert isinstance(doc, builders.brat.BratDocumentWithMergedSpans)
```
### Document Converters
The dataset provides document converters for the following target document types:
- `pytorch_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations`
- `LabeledSpans`, converted from `BratDocumentWithMergedSpans`'s `spans`
- labels: `MajorClaim`, `Claim`, `Premise`
  - `BinaryRelations`, converted from `BratDocumentWithMergedSpans`'s `relations`
- labels: `Support`, `Partial-Attack`, `Attack`
See [here](https://github.com/ChristophAlt/pytorch-ie/blob/main/src/pytorch_ie/documents.py) for the document type
definitions.
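A sketch of how the converted document type might be obtained; the `to_document_type()` call reflects my understanding of the `pie_datasets` API and should be checked against its documentation:

```python
from pie_datasets import load_dataset
from pytorch_ie.documents import TextDocumentWithLabeledSpansAndBinaryRelations

# Convert via the registered document converter (API usage is an assumption).
datasets = load_dataset("pie/abstrct")
converted = datasets.to_document_type(TextDocumentWithLabeledSpansAndBinaryRelations)

doc = converted["neoplasm_train"][0]
print(doc.labeled_spans[0])      # a span labeled "MajorClaim", "Claim" or "Premise"
print(doc.binary_relations[0])   # a relation labeled "Support", "Partial-Attack" or "Attack"
```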
### Data Splits
| Disease-based Split | `neoplasm` | `glaucoma` | `mixed` |
| --------------------------------------------------------- | ----------------------: | -------------------: | -------------------: |
| No. of documents <br/>- `_train`<br/>- `_dev`<br/>- `_test` | <br/>350<br/>50<br/>100 | <br/> <br/> <br/>100 | <br/> <br/> <br/>100 |
**Important Note**:
- `mixed_test` contains 20 abstracts on the following diseases: glaucoma, neoplasm, diabetes, hypertension, hepatitis.
- 31 out of 40 abstracts in `mixed_test` overlap with abstracts in `neoplasm_test` and `glaucoma_test`.
### Label Descriptions
In this section, we describe labels according to [Mayer et al. (2020)](https://ebooks.iospress.nl/publication/55129), as well as our label counts on 669 abstracts.
Unfortunately, the number we report does not correspond to what Mayer et al. reported in their paper (see Table 1, p. 2109).
Morio et al. ([2022](https://aclanthology.org/2022.tacl-1.37.pdf); p. 642, Table 1), who utilized this corpus for their AM tasks, also reported another number, claiming there were double annotation errors in the original statistic collection (see [reference](https://github.com/hitachi-nlp/graph_parser/blob/main/examples/multitask_am/README.md#qas)).
#### Components
| Components | Count | Percentage |
| ------------ | ----: | ---------: |
| `MajorClaim` | 129 | 3 % |
| `Claim` | 1282 | 30.2 % |
| `Premise` | 2842 | 66.8 % |
- `MajorClaim`s are more general, concluding claims, which are supported by more specific claims
- `Claim` is a concluding statement made by the author about the outcome of the study. Claims only point to other claims.
- `Premise` (a.k.a. evidence) is an observation or measurement in the study, which supports or attacks another argument component, usually a `claim`. They are observed facts, and therefore credible without further justifications, as this is the ground truth the argumentation is based on.
(Mayer et al. 2020, p.2110)
#### Relations
| Relations | Count | Percentage |
| ------------------------ | ----: | ---------: |
| support: `Support` | 2289 | 87 % |
| attack: `Partial-Attack` | 275 | 10.4 % |
| attack: `Attack` | 69 | 2.6 % |
- `Support`: All statements or observations justifying the proposition of the target component
- `Partial-Attack`: when the source component is not in full contradiction but weakens the target component by constraining its proposition. This usually occurs between two claims
- `Attack`: A component is attacking another one if it is
- i) contradicting the proposition of the target component, or
- ii) undercutting its implicit assumption of significance constraints
- `Premise` can only be connected to either `Claim` or another `Premise`
- `Claim`s can only point to other `Claim`s
- There might be more than one **outgoing** and/or **incoming** relation. In rare cases, there is no relation to another component at all.
(Mayer et al. 2020, p.2110)
## Dataset Creation
### Curation Rationale
"\[D\]espite its natural employment in healthcare applications, only few approaches have applied AM methods to this kind
of text, and their contribution is limited to the detection
of argument components, disregarding the more complex phase of
predicting the relations among them. In addition, no huge annotated
dataset for AM is available for the healthcare domain (p. 2108)...to support clinicians in decision making or in (semi)-automatically
filling evidence tables for systematic reviews in evidence-based medicine. (p. 2114)"
### Source Data
[MEDLINE database](https://www.nlm.nih.gov/medline/medline_overview.html)
#### Initial Data Collection and Normalization
Extended from the previous dataset in [Mayer et al. 2018](https://webusers.i3s.unice.fr/~riveill/IADB/publications/2018-COMMA.pdf), 500 medical abstracts from randomized controlled trials (RCTs) were retrieved directly from [PubMed](https://www.ncbi.nlm.nih.gov/pubmed/) by searching for titles or abstracts containing the disease name.
(See the definition of RCT in the authors' [guideline](https://gitlab.com/tomaye/abstrct/-/blob/master/AbstRCT_corpus/AnnotationGuidelines.pdf) (Section 1.2) and [US National Library of Medicine](https://www.ncbi.nlm.nih.gov/pmc/articles/PMC6235704/))
#### Who are the source language producers?
\[More Information Needed\]
### Annotations
#### Annotation process
"An expert in the medical domain (a pharmacist) validated the annotation
guidelines before starting the annotation process." (p. 2110)
"Annotation was started after a training phase, where amongst others the component boundaries were topic of discussion. Gold labels
were set after a reconciliation phase, during which the annotators
tried to reach an agreement. While the number of annotators vary for
the two annotation phases (component and relation annotation).
On the annotation of argument components, "IAA among the three annotators has been calculated
on 30 abstracts, resulting in a Fleiss’ kappa of 0.72 for argumentative
components and 0.68 for the more fine-grained distinction between
claims and evidence." (p. 2109)
On the annotation of argumentative relation, "IAA has been calculated on 30 abstracts annotated in parallel by three annotators,
resulting in a Fleiss’ kappa of
0.62. The annotation of the remaining abstracts was carried out by
one of the above mentioned annotators." (p. 2110)
See the [Annotation Guideline](https://gitlab.com/tomaye/abstrct/-/blob/master/AbstRCT_corpus/AnnotationGuidelines.pdf?ref_type=heads) for more information on definitions and annotated samples.
#### Who are the annotators?
Two annotators with a background in computational linguistics. No information was given on the third annotator.
### Personal and Sensitive Information
\[More Information Needed\]
## Considerations for Using the Data
### Social Impact of Dataset
"These \[*intelligent*\] systems apply to clinical trials,
clinical guidelines, and electronic health records, and their solutions range from the automated detection of PICO elements
in health records to evidence-based reasoning for decision making. These applications highlight the need of clinicians to be supplied with frameworks able to extract, from the huge
quantity of data available for the different diseases and treatments,
the exact information they necessitate and to present this information in a structured way, easy to be (possibly semi-automatically)
analyzed...Given its aptness to automatically detect in text those
argumentative structures that are at the basis of evidence-based reasoning applications, AM represents a potential valuable contribution
in the healthcare domain." (p. 2108)
"We expect that our work will have a large impact for clinicians as it
is a crucial step towards AI supported clinical deliberation at a large
scale." (p. 2114)
### Discussion of Biases
\[More Information Needed\]
### Other Known Limitations
\[More Information Needed\]
## Additional Information
### Dataset Curators
\[More Information Needed\]
### Licensing Information
- **License**: the AbstRCT dataset is released under a [Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License](https://creativecommons.org/licenses/by-nc-sa/4.0/legalcode)
- **Funding**: This work is partly funded by the French government labelled PIA
program under its IDEX UCA JEDI project (ANR-15-IDEX-0001).
This work has been supported by the French government, through the
3IA Cote d’Azur Investments in the Future project managed by the
National Research Agency (ANR) with the reference number ANR19-P3IA-0002
### Citation Information
```
@inproceedings{mayer2020ecai,
author = {Tobias Mayer and
Elena Cabrio and
Serena Villata},
title = {Transformer-Based Argument Mining for Healthcare Applications},
booktitle = {{ECAI} 2020 - 24th European Conference on Artificial Intelligence},
series = {Frontiers in Artificial Intelligence and Applications},
volume = {325},
pages = {2108--2115},
publisher = {{IOS} Press},
year = {2020},
}
```
### Contributions
Thanks to [@ArneBinder](https://github.com/ArneBinder) and [@idalr](https://github.com/idalr) for adding this dataset.
| pie/abstrct | [
"region:us"
] | 2023-12-22T01:29:38+00:00 | {} | 2023-12-22T01:30:03+00:00 | [] | [] | TAGS
#region-us
| PIE Dataset Card for "abstrct"
==============================
This is a PyTorch-IE wrapper for the AbstRCT dataset (paper and data repository). Since the AbstRCT dataset is published in the BRAT standoff format, this dataset builder is based on the PyTorch-IE brat dataset loading script.
Therefore, the 'abstrct' dataset as described here follows the data structure from the PIE brat dataset card.
### Dataset Summary
A novel corpus of healthcare texts (i.e., RCT abstracts on various diseases) from the MEDLINE database, which
are annotated with argumentative components (i.e., 'MajorClaim', 'Claim', and 'Premise') and relations (i.e., 'Support', 'Attack', and 'Partial-attack'),
in order to support clinicians' daily tasks in information finding and evidence-based reasoning for decision making.
### Supported Tasks and Leaderboards
* Tasks: Argumentation Mining, Component Identification, Boundary Detection, Relation Identification, Link Prediction
* Leaderboard:
### Languages
The language in the dataset is English (in the medical/healthcare domain).
### Dataset Variants
The 'abstrct' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,
that this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.
The reason is that the AbstRCT dataset has already been published with only single-fragment spans.
Without any need to merge fragments, the document type 'BratDocumentWithMergedSpans' is easier to handle for most of the task modules.
### Data Schema
See PIE-Brat Data Schema.
### Usage
### Document Converters
The dataset provides document converters for the following target document types:
* 'pytorch\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'
+ 'LabeledSpans', converted from 'BratDocumentWithMergedSpans''s 'spans'
- labels: 'MajorClaim', 'Claim', 'Premise'
+ 'BinraryRelations', converted from 'BratDocumentWithMergedSpans''s 'relations'
- labels: 'Support', 'Partial-Attack', 'Attack'
See here for the document type
definitions.
### Data Splits
Important Note:
* 'mixed\_test' contains 20 abstracts on the following diseases: glaucoma, neoplasm, diabetes, hypertension, hepatitis.
* 31 out of 40 abstracts in 'mixed\_test' overlap with abstracts in 'neoplasm\_test' and 'glaucoma\_test'.
### Label Descriptions
In this section, we describe labels according to Mayer et al. (2020), as well as our label counts on 669 abstracts.
Unfortunately, the number we report does not correspond to what Mayer et al. reported in their paper (see Table 1, p. 2109).
Morio et al. (2022; p. 642, Table 1), who utilized this corpus for their AM tasks, also reported another number, claiming there were double annotation errors in the original statistic collection (see reference).
#### Components
* 'MajorClaim' are more general/concluding 'claim''s, which is supported by more specific claims
* 'Claim' is a concluding statement made by the author about the outcome of the study. Claims only points to other claims.
* 'Premise' (a.k.a. evidence) is an observation or measurement in the study, which supports or attacks another argument component, usually a 'claim'. They are observed facts, and therefore credible without further justifications, as this is the ground truth the argumentation is based on.
(Mayer et al. 2020, p.2110)
#### Relations
* 'Support': All statements or observations justifying the proposition of the target component
* 'Partial-Attack': when the source component is not in full contradiction, but weakening the target component by constraining its proposition. Usually occur between two claims
* 'Attack': A component is attacking another one, if it is
+ i) contradicting the proposition of the target component, or
+ ii) undercutting its implicit assumption of significance constraints
* 'Premise' can only be connected to either 'Claim' or another 'Premise'
* 'Claim''s can only point to other 'Claim''s
* There might be more than one outgoing and/or incoming relation . In rare case, there is no relation to another component at all.
(Mayer et al. 2020, p.2110)
Dataset Creation
----------------
### Curation Rationale
"[D]espite its natural employment in healthcare applications, only few approaches have applied AM methods to this kind
of text, and their contribution is limited to the detection
of argument components, disregarding the more complex phase of
predicting the relations among them. In addition, no huge annotated
dataset for AM is available for the healthcare domain (p. 2108)...to support clinicians in decision making or in (semi)-automatically
filling evidence tables for systematic reviews in evidence-based medicine. (p. 2114)"
### Source Data
MEDLINE database
#### Initial Data Collection and Normalization
Extended from the previous dataset in Mayer et al. 2018, 500 medical abstract from randomized controlled trials (RCTs) were retrieved directly from PubMed by searching for titles or abstracts containing the disease name.
(See the definition of RCT in the authors' guideline (Section 1.2) and US National Library of Medicine)
#### Who are the source language producers?
\]
### Annotations
#### Annotation process
"An expert in the medical domain (a pharmacist) validated the annotation
guidelines before starting the annotation process." (p. 2110)
"Annotation was started after a training phase, where amongst others the component boundaries were topic of discussion. Gold labels
were set after a reconciliation phase, during which the annotators
tried to reach an agreement. While the number of annotators vary for
the two annotation phases (component and relation annotation).
On the annotation of argument components, "IAA among the three annotators has been calculated
on 30 abstracts, resulting in a Fleiss’ kappa of 0.72 for argumentative
components and 0.68 for the more fine-grained distinction between
claims and evidence." (p. 2109)
On the annotation of argumentative relation, "IAA has been calculated on 30 abstracts annotated in parallel by three annotators,
resulting in a Fleiss’ kappa of
0.62. The annotation of the remaining abstracts was carried out by
one of the above mentioned annotators." (p. 2110)
See the Annotation Guideline for more information on definitions and annotated samples.
#### Who are the annotators?
Two annotators with background in computational linguistics. No information was given on the third annotator.
### Personal and Sensitive Information
\]
Considerations for Using the Data
---------------------------------
### Social Impact of Dataset
"These [*intelligent*] systems apply to clinical trials,
clinical guidelines, and electronic health records, and their solutions range from the automated detection of PICO elements
in health records to evidence-based reasoning for decision making. These applications highlight the need of clinicians to be supplied with frameworks able to extract, from the huge
quantity of data available for the different diseases and treatments,
the exact information they necessitate and to present this information in a structured way, easy to be (possibly semi-automatically)
analyzed...Given its aptness to automatically detect in text those
argumentative structures that are at the basis of evidence-based reasoning applications, AM represents a potential valuable contribution
in the healthcare domain." (p. 2108)
"We expect that our work will have a large impact for clinicians as it
is a crucial step towards AI supported clinical deliberation at a large
scale." (p. 2114)
### Discussion of Biases
\]
### Other Known Limitations
\]
Additional Information
----------------------
### Dataset Curators
\]
### Licensing Information
* License: the AbstRCT dataset is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License
* Funding: This work is partly funded by the French government labelled PIA
program under its IDEX UCA JEDI project (ANR-15-IDEX-0001).
This work has been supported by the French government, through the
3IA Cote d’Azur Investments in the Future project managed by the
National Research Agency (ANR) with the reference number ANR19-P3IA-0002
### Contributions
Thanks to @ArneBinder and @idalr for adding this dataset.
| [
"### Dataset Summary\n\n\nA novel corpus of healthcare texts (i.e., RCT abstracts on various diseases) from the MEDLINE database, which\nare annotated with argumentative components (i.e., 'MajorClaim', 'Claim', and 'Premise') and relations (i.e., 'Support', 'Attack', and 'Partial-attack'),\nin order to support clinicians' daily tasks in information finding and evidence-based reasoning for decision making.",
"### Supported Tasks and Leaderboards\n\n\n* Tasks: Argumentation Mining, Component Identification, Boundary Detection, Relation Identification, Link Prediction\n* Leaderboard:",
"### Languages\n\n\nThe language in the dataset is English (in the medical/healthcare domain).",
"### Dataset Variants\n\n\nThe 'abstrct' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,\nthat this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.\nThe reason is that the AbstRCT dataset has already been published with only single-fragment spans.\nWithout any need to merge fragments, the document type 'BratDocumentWithMergedSpans' is easier to handle for most of the task modules.",
"### Data Schema\n\n\nSee PIE-Brat Data Schema.",
"### Usage",
"### Document Converters\n\n\nThe dataset provides document converters for the following target document types:\n\n\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'\n\t+ 'LabeledSpans', converted from 'BratDocumentWithMergedSpans''s 'spans'\n\t\t- labels: 'MajorClaim', 'Claim', 'Premise'\n\t+ 'BinraryRelations', converted from 'BratDocumentWithMergedSpans''s 'relations'\n\t\t- labels: 'Support', 'Partial-Attack', 'Attack'\n\n\nSee here for the document type\ndefinitions.",
"### Data Splits\n\n\n\nImportant Note:\n\n\n* 'mixed\\_test' contains 20 abstracts on the following diseases: glaucoma, neoplasm, diabetes, hypertension, hepatitis.\n* 31 out of 40 abstracts in 'mixed\\_test' overlap with abstracts in 'neoplasm\\_test' and 'glaucoma\\_test'.",
"### Label Descriptions\n\n\nIn this section, we describe labels according to Mayer et al. (2020), as well as our label counts on 669 abstracts.\n\n\nUnfortunately, the number we report does not correspond to what Mayer et al. reported in their paper (see Table 1, p. 2109).\nMorio et al. (2022; p. 642, Table 1), who utilized this corpus for their AM tasks, also reported another number, claiming there were double annotation errors in the original statistic collection (see reference).",
"#### Components\n\n\n\n* 'MajorClaim' are more general/concluding 'claim''s, which is supported by more specific claims\n* 'Claim' is a concluding statement made by the author about the outcome of the study. Claims only points to other claims.\n* 'Premise' (a.k.a. evidence) is an observation or measurement in the study, which supports or attacks another argument component, usually a 'claim'. They are observed facts, and therefore credible without further justifications, as this is the ground truth the argumentation is based on.\n\n\n(Mayer et al. 2020, p.2110)",
"#### Relations\n\n\n\n* 'Support': All statements or observations justifying the proposition of the target component\n* 'Partial-Attack': when the source component is not in full contradiction, but weakening the target component by constraining its proposition. Usually occur between two claims\n* 'Attack': A component is attacking another one, if it is\n\t+ i) contradicting the proposition of the target component, or\n\t+ ii) undercutting its implicit assumption of significance constraints\n* 'Premise' can only be connected to either 'Claim' or another 'Premise'\n* 'Claim''s can only point to other 'Claim''s\n* There might be more than one outgoing and/or incoming relation . In rare case, there is no relation to another component at all.\n\n\n(Mayer et al. 2020, p.2110)\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n\"[D]espite its natural employment in healthcare applications, only few approaches have applied AM methods to this kind\nof text, and their contribution is limited to the detection\nof argument components, disregarding the more complex phase of\npredicting the relations among them. In addition, no huge annotated\ndataset for AM is available for the healthcare domain (p. 2108)...to support clinicians in decision making or in (semi)-automatically\nfilling evidence tables for systematic reviews in evidence-based medicine. (p. 2114)\"",
"### Source Data\n\n\nMEDLINE database",
"#### Initial Data Collection and Normalization\n\n\nExtended from the previous dataset in Mayer et al. 2018, 500 medical abstract from randomized controlled trials (RCTs) were retrieved directly from PubMed by searching for titles or abstracts containing the disease name.\n\n\n(See the definition of RCT in the authors' guideline (Section 1.2) and US National Library of Medicine)",
"#### Who are the source language producers?\n\n\n\\]",
"### Annotations",
"#### Annotation process\n\n\n\"An expert in the medical domain (a pharmacist) validated the annotation\nguidelines before starting the annotation process.\" (p. 2110)\n\n\n\"Annotation was started after a training phase, where amongst others the component boundaries were topic of discussion. Gold labels\nwere set after a reconciliation phase, during which the annotators\ntried to reach an agreement. While the number of annotators vary for\nthe two annotation phases (component and relation annotation).\n\n\nOn the annotation of argument components, \"IAA among the three annotators has been calculated\non 30 abstracts, resulting in a Fleiss’ kappa of 0.72 for argumentative\ncomponents and 0.68 for the more fine-grained distinction between\nclaims and evidence.\" (p. 2109)\n\n\nOn the annotation of argumentative relation, \"IAA has been calculated on 30 abstracts annotated in parallel by three annotators,\nresulting in a Fleiss’ kappa of\n0.62. The annotation of the remaining abstracts was carried out by\none of the above mentioned annotators.\" (p. 2110)\n\n\nSee the Annotation Guideline for more information on definitions and annotated samples.",
"#### Who are the annotators?\n\n\nTwo annotators with background in computational linguistics. No information was given on the third annotator.",
"### Personal and Sensitive Information\n\n\n\\]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n\"These [*intelligent*] systems apply to clinical trials,\nclinical guidelines, and electronic health records, and their solutions range from the automated detection of PICO elements\nin health records to evidence-based reasoning for decision making. These applications highlight the need of clinicians to be supplied with frameworks able to extract, from the huge\nquantity of data available for the different diseases and treatments,\nthe exact information they necessitate and to present this information in a structured way, easy to be (possibly semi-automatically)\nanalyzed...Given its aptness to automatically detect in text those\nargumentative structures that are at the basis of evidence-based reasoning applications, AM represents a potential valuable contribution\nin the healthcare domain.\" (p. 2108)\n\n\n\"We expect that our work will have a large impact for clinicians as it\nis a crucial step towards AI supported clinical deliberation at a large\nscale.\" (p. 2114)",
"### Discussion of Biases\n\n\n\\]",
"### Other Known Limitations\n\n\n\\]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n\\]",
"### Licensing Information\n\n\n* License: the AbstRCT dataset is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License\n* Funding: This work is partly funded by the French government labelled PIA\nprogram under its IDEX UCA JEDI project (ANR-15-IDEX-0001).\nThis work has been supported by the French government, through the\n3IA Cote d’Azur Investments in the Future project managed by the\nNational Research Agency (ANR) with the reference number ANR19-P3IA-0002",
"### Contributions\n\n\nThanks to @ArneBinder and @idalr for adding this dataset."
] | [
"TAGS\n#region-us \n",
"### Dataset Summary\n\n\nA novel corpus of healthcare texts (i.e., RCT abstracts on various diseases) from the MEDLINE database, which\nare annotated with argumentative components (i.e., 'MajorClaim', 'Claim', and 'Premise') and relations (i.e., 'Support', 'Attack', and 'Partial-attack'),\nin order to support clinicians' daily tasks in information finding and evidence-based reasoning for decision making.",
"### Supported Tasks and Leaderboards\n\n\n* Tasks: Argumentation Mining, Component Identification, Boundary Detection, Relation Identification, Link Prediction\n* Leaderboard:",
"### Languages\n\n\nThe language in the dataset is English (in the medical/healthcare domain).",
"### Dataset Variants\n\n\nThe 'abstrct' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,\nthat this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.\nThe reason is that the AbstRCT dataset has already been published with only single-fragment spans.\nWithout any need to merge fragments, the document type 'BratDocumentWithMergedSpans' is easier to handle for most of the task modules.",
"### Data Schema\n\n\nSee PIE-Brat Data Schema.",
"### Usage",
"### Document Converters\n\n\nThe dataset provides document converters for the following target document types:\n\n\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'\n\t+ 'LabeledSpans', converted from 'BratDocumentWithMergedSpans''s 'spans'\n\t\t- labels: 'MajorClaim', 'Claim', 'Premise'\n\t+ 'BinraryRelations', converted from 'BratDocumentWithMergedSpans''s 'relations'\n\t\t- labels: 'Support', 'Partial-Attack', 'Attack'\n\n\nSee here for the document type\ndefinitions.",
"### Data Splits\n\n\n\nImportant Note:\n\n\n* 'mixed\\_test' contains 20 abstracts on the following diseases: glaucoma, neoplasm, diabetes, hypertension, hepatitis.\n* 31 out of 40 abstracts in 'mixed\\_test' overlap with abstracts in 'neoplasm\\_test' and 'glaucoma\\_test'.",
"### Label Descriptions\n\n\nIn this section, we describe labels according to Mayer et al. (2020), as well as our label counts on 669 abstracts.\n\n\nUnfortunately, the number we report does not correspond to what Mayer et al. reported in their paper (see Table 1, p. 2109).\nMorio et al. (2022; p. 642, Table 1), who utilized this corpus for their AM tasks, also reported another number, claiming there were double annotation errors in the original statistic collection (see reference).",
"#### Components\n\n\n\n* 'MajorClaim' are more general/concluding 'claim''s, which is supported by more specific claims\n* 'Claim' is a concluding statement made by the author about the outcome of the study. Claims only points to other claims.\n* 'Premise' (a.k.a. evidence) is an observation or measurement in the study, which supports or attacks another argument component, usually a 'claim'. They are observed facts, and therefore credible without further justifications, as this is the ground truth the argumentation is based on.\n\n\n(Mayer et al. 2020, p.2110)",
"#### Relations\n\n\n\n* 'Support': All statements or observations justifying the proposition of the target component\n* 'Partial-Attack': when the source component is not in full contradiction, but weakening the target component by constraining its proposition. Usually occur between two claims\n* 'Attack': A component is attacking another one, if it is\n\t+ i) contradicting the proposition of the target component, or\n\t+ ii) undercutting its implicit assumption of significance constraints\n* 'Premise' can only be connected to either 'Claim' or another 'Premise'\n* 'Claim''s can only point to other 'Claim''s\n* There might be more than one outgoing and/or incoming relation . In rare case, there is no relation to another component at all.\n\n\n(Mayer et al. 2020, p.2110)\n\n\nDataset Creation\n----------------",
"### Curation Rationale\n\n\n\"[D]espite its natural employment in healthcare applications, only few approaches have applied AM methods to this kind\nof text, and their contribution is limited to the detection\nof argument components, disregarding the more complex phase of\npredicting the relations among them. In addition, no huge annotated\ndataset for AM is available for the healthcare domain (p. 2108)...to support clinicians in decision making or in (semi)-automatically\nfilling evidence tables for systematic reviews in evidence-based medicine. (p. 2114)\"",
"### Source Data\n\n\nMEDLINE database",
"#### Initial Data Collection and Normalization\n\n\nExtended from the previous dataset in Mayer et al. 2018, 500 medical abstract from randomized controlled trials (RCTs) were retrieved directly from PubMed by searching for titles or abstracts containing the disease name.\n\n\n(See the definition of RCT in the authors' guideline (Section 1.2) and US National Library of Medicine)",
"#### Who are the source language producers?\n\n\n\\]",
"### Annotations",
"#### Annotation process\n\n\n\"An expert in the medical domain (a pharmacist) validated the annotation\nguidelines before starting the annotation process.\" (p. 2110)\n\n\n\"Annotation was started after a training phase, where amongst others the component boundaries were topic of discussion. Gold labels\nwere set after a reconciliation phase, during which the annotators\ntried to reach an agreement. While the number of annotators vary for\nthe two annotation phases (component and relation annotation).\n\n\nOn the annotation of argument components, \"IAA among the three annotators has been calculated\non 30 abstracts, resulting in a Fleiss’ kappa of 0.72 for argumentative\ncomponents and 0.68 for the more fine-grained distinction between\nclaims and evidence.\" (p. 2109)\n\n\nOn the annotation of argumentative relation, \"IAA has been calculated on 30 abstracts annotated in parallel by three annotators,\nresulting in a Fleiss’ kappa of\n0.62. The annotation of the remaining abstracts was carried out by\none of the above mentioned annotators.\" (p. 2110)\n\n\nSee the Annotation Guideline for more information on definitions and annotated samples.",
"#### Who are the annotators?\n\n\nTwo annotators with background in computational linguistics. No information was given on the third annotator.",
"### Personal and Sensitive Information\n\n\n\\]\n\n\nConsiderations for Using the Data\n---------------------------------",
"### Social Impact of Dataset\n\n\n\"These [*intelligent*] systems apply to clinical trials,\nclinical guidelines, and electronic health records, and their solutions range from the automated detection of PICO elements\nin health records to evidence-based reasoning for decision making. These applications highlight the need of clinicians to be supplied with frameworks able to extract, from the huge\nquantity of data available for the different diseases and treatments,\nthe exact information they necessitate and to present this information in a structured way, easy to be (possibly semi-automatically)\nanalyzed...Given its aptness to automatically detect in text those\nargumentative structures that are at the basis of evidence-based reasoning applications, AM represents a potential valuable contribution\nin the healthcare domain.\" (p. 2108)\n\n\n\"We expect that our work will have a large impact for clinicians as it\nis a crucial step towards AI supported clinical deliberation at a large\nscale.\" (p. 2114)",
"### Discussion of Biases\n\n\n\\]",
"### Other Known Limitations\n\n\n\\]\n\n\nAdditional Information\n----------------------",
"### Dataset Curators\n\n\n\\]",
"### Licensing Information\n\n\n* License: the AbstRCT dataset is released under a Creative Commons Attribution-NonCommercial-ShareAlike 4.0 International License\n* Funding: This work is partly funded by the French government labelled PIA\nprogram under its IDEX UCA JEDI project (ANR-15-IDEX-0001).\nThis work has been supported by the French government, through the\n3IA Cote d’Azur Investments in the Future project managed by the\nNational Research Agency (ANR) with the reference number ANR19-P3IA-0002",
"### Contributions\n\n\nThanks to @ArneBinder and @idalr for adding this dataset."
] | [
6,
122,
42,
21,
129,
15,
4,
156,
83,
116,
143,
204,
125,
7,
89,
13,
5,
270,
32,
21,
215,
11,
17,
9,
121,
23
] | [
"passage: TAGS\n#region-us \n### Dataset Summary\n\n\nA novel corpus of healthcare texts (i.e., RCT abstracts on various diseases) from the MEDLINE database, which\nare annotated with argumentative components (i.e., 'MajorClaim', 'Claim', and 'Premise') and relations (i.e., 'Support', 'Attack', and 'Partial-attack'),\nin order to support clinicians' daily tasks in information finding and evidence-based reasoning for decision making.### Supported Tasks and Leaderboards\n\n\n* Tasks: Argumentation Mining, Component Identification, Boundary Detection, Relation Identification, Link Prediction\n* Leaderboard:### Languages\n\n\nThe language in the dataset is English (in the medical/healthcare domain).### Dataset Variants\n\n\nThe 'abstrct' dataset comes in a single version ('default') with 'BratDocumentWithMergedSpans' as document type. Note,\nthat this in contrast to the base 'brat' dataset, where the document type for the 'default' variant is 'BratDocument'.\nThe reason is that the AbstRCT dataset has already been published with only single-fragment spans.\nWithout any need to merge fragments, the document type 'BratDocumentWithMergedSpans' is easier to handle for most of the task modules.### Data Schema\n\n\nSee PIE-Brat Data Schema.### Usage### Document Converters\n\n\nThe dataset provides document converters for the following target document types:\n\n\n* 'pytorch\\_ie.documents.TextDocumentWithLabeledSpansAndBinaryRelations'\n\t+ 'LabeledSpans', converted from 'BratDocumentWithMergedSpans''s 'spans'\n\t\t- labels: 'MajorClaim', 'Claim', 'Premise'\n\t+ 'BinraryRelations', converted from 'BratDocumentWithMergedSpans''s 'relations'\n\t\t- labels: 'Support', 'Partial-Attack', 'Attack'\n\n\nSee here for the document type\ndefinitions.",
"passage: ### Data Splits\n\n\n\nImportant Note:\n\n\n* 'mixed\\_test' contains 20 abstracts on the following diseases: glaucoma, neoplasm, diabetes, hypertension, hepatitis.\n* 31 out of 40 abstracts in 'mixed\\_test' overlap with abstracts in 'neoplasm\\_test' and 'glaucoma\\_test'.### Label Descriptions\n\n\nIn this section, we describe labels according to Mayer et al. (2020), as well as our label counts on 669 abstracts.\n\n\nUnfortunately, the number we report does not correspond to what Mayer et al. reported in their paper (see Table 1, p. 2109).\nMorio et al. (2022; p. 642, Table 1), who utilized this corpus for their AM tasks, also reported another number, claiming there were double annotation errors in the original statistic collection (see reference).#### Components\n\n\n\n* 'MajorClaim' are more general/concluding 'claim''s, which is supported by more specific claims\n* 'Claim' is a concluding statement made by the author about the outcome of the study. Claims only points to other claims.\n* 'Premise' (a.k.a. evidence) is an observation or measurement in the study, which supports or attacks another argument component, usually a 'claim'. They are observed facts, and therefore credible without further justifications, as this is the ground truth the argumentation is based on.\n\n\n(Mayer et al. 2020, p.2110)#### Relations\n\n\n\n* 'Support': All statements or observations justifying the proposition of the target component\n* 'Partial-Attack': when the source component is not in full contradiction, but weakening the target component by constraining its proposition. Usually occur between two claims\n* 'Attack': A component is attacking another one, if it is\n\t+ i) contradicting the proposition of the target component, or\n\t+ ii) undercutting its implicit assumption of significance constraints\n* 'Premise' can only be connected to either 'Claim' or another 'Premise'\n* 'Claim''s can only point to other 'Claim''s\n* There might be more than one outgoing and/or incoming relation . In rare case, there is no relation to another component at all.\n\n\n(Mayer et al. 2020, p.2110)\n\n\nDataset Creation\n----------------",
"passage: ### Curation Rationale\n\n\n\"[D]espite its natural employment in healthcare applications, only few approaches have applied AM methods to this kind\nof text, and their contribution is limited to the detection\nof argument components, disregarding the more complex phase of\npredicting the relations among them. In addition, no huge annotated\ndataset for AM is available for the healthcare domain (p. 2108)...to support clinicians in decision making or in (semi)-automatically\nfilling evidence tables for systematic reviews in evidence-based medicine. (p. 2114)\"### Source Data\n\n\nMEDLINE database#### Initial Data Collection and Normalization\n\n\nExtended from the previous dataset in Mayer et al. 2018, 500 medical abstract from randomized controlled trials (RCTs) were retrieved directly from PubMed by searching for titles or abstracts containing the disease name.\n\n\n(See the definition of RCT in the authors' guideline (Section 1.2) and US National Library of Medicine)#### Who are the source language producers?\n\n\n\\]### Annotations#### Annotation process\n\n\n\"An expert in the medical domain (a pharmacist) validated the annotation\nguidelines before starting the annotation process.\" (p. 2110)\n\n\n\"Annotation was started after a training phase, where amongst others the component boundaries were topic of discussion. Gold labels\nwere set after a reconciliation phase, during which the annotators\ntried to reach an agreement. While the number of annotators vary for\nthe two annotation phases (component and relation annotation).\n\n\nOn the annotation of argument components, \"IAA among the three annotators has been calculated\non 30 abstracts, resulting in a Fleiss’ kappa of 0.72 for argumentative\ncomponents and 0.68 for the more fine-grained distinction between\nclaims and evidence.\" (p. 2109)\n\n\nOn the annotation of argumentative relation, \"IAA has been calculated on 30 abstracts annotated in parallel by three annotators,\nresulting in a Fleiss’ kappa of\n0.62. The annotation of the remaining abstracts was carried out by\none of the above mentioned annotators.\" (p. 2110)\n\n\nSee the Annotation Guideline for more information on definitions and annotated samples.#### Who are the annotators?\n\n\nTwo annotators with background in computational linguistics. No information was given on the third annotator.### Personal and Sensitive Information\n\n\n\\]\n\n\nConsiderations for Using the Data\n---------------------------------"
] |
0e388c9c6b29ce62eb6a166f83dd76229ccbd296 |
# Dataset Card for Evaluation run of MexIvanov/zephyr-python-ru-merged
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [MexIvanov/zephyr-python-ru-merged](https://huggingface.co/MexIvanov/zephyr-python-ru-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.
The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.
An additional configuration "results" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged",
"harness_winogrande_5",
split="train")
```
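
If you are unsure which configuration names exist, the `datasets` library can enumerate them directly from the Hub. The snippet below is a minimal sketch: the `harness_gsm8k_5` configuration and the "latest" split are taken from the configurations listed further down in this card.

```python
from datasets import get_dataset_config_names, load_dataset

REPO_ID = "open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged"

# Enumerate the available configurations (one per evaluated task).
config_names = get_dataset_config_names(REPO_ID)
print(f"{len(config_names)} configurations available, e.g. {config_names[:3]}")

# Load the most recent results for a single task and look at one example.
gsm8k_details = load_dataset(REPO_ID, "harness_gsm8k_5", split="latest")
print(gsm8k_details)
print(gsm8k_details[0])
```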
## Latest results
These are the [latest results from run 2023-12-22T01:34:26.452654](https://huggingface.co/datasets/open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged/blob/main/results_2023-12-22T01-34-26.452654.json) (note that there might be results for other tasks in the repos if successive evals didn't cover the same tasks; you can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5993966446508577,
"acc_stderr": 0.0330766584101115,
"acc_norm": 0.6050500523708532,
"acc_norm_stderr": 0.033760089456490616,
"mc1": 0.379436964504284,
"mc1_stderr": 0.01698703926614298,
"mc2": 0.5280717894644429,
"mc2_stderr": 0.015316530809563272
},
"harness|arc:challenge|25": {
"acc": 0.5290102389078498,
"acc_stderr": 0.014586776355294314,
"acc_norm": 0.560580204778157,
"acc_norm_stderr": 0.014503747823580122
},
"harness|hellaswag|10": {
"acc": 0.622585142401912,
"acc_stderr": 0.004837493439874301,
"acc_norm": 0.8205536745668194,
"acc_norm_stderr": 0.003829413805113985
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.29,
"acc_stderr": 0.04560480215720684,
"acc_norm": 0.29,
"acc_norm_stderr": 0.04560480215720684
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.5777777777777777,
"acc_stderr": 0.04266763404099582,
"acc_norm": 0.5777777777777777,
"acc_norm_stderr": 0.04266763404099582
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5986842105263158,
"acc_stderr": 0.03988903703336284,
"acc_norm": 0.5986842105263158,
"acc_norm_stderr": 0.03988903703336284
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.49,
"acc_stderr": 0.05024183937956912,
"acc_norm": 0.49,
"acc_norm_stderr": 0.05024183937956912
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6716981132075471,
"acc_stderr": 0.02890159361241178,
"acc_norm": 0.6716981132075471,
"acc_norm_stderr": 0.02890159361241178
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.6875,
"acc_stderr": 0.038760854559127644,
"acc_norm": 0.6875,
"acc_norm_stderr": 0.038760854559127644
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.47,
"acc_stderr": 0.050161355804659205,
"acc_norm": 0.47,
"acc_norm_stderr": 0.050161355804659205
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.47,
"acc_stderr": 0.05016135580465919,
"acc_norm": 0.47,
"acc_norm_stderr": 0.05016135580465919
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.38,
"acc_stderr": 0.04878317312145633,
"acc_norm": 0.38,
"acc_norm_stderr": 0.04878317312145633
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.6127167630057804,
"acc_stderr": 0.03714325906302065,
"acc_norm": 0.6127167630057804,
"acc_norm_stderr": 0.03714325906302065
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.39215686274509803,
"acc_stderr": 0.048580835742663454,
"acc_norm": 0.39215686274509803,
"acc_norm_stderr": 0.048580835742663454
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.72,
"acc_stderr": 0.04512608598542128,
"acc_norm": 0.72,
"acc_norm_stderr": 0.04512608598542128
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.5276595744680851,
"acc_stderr": 0.03263597118409769,
"acc_norm": 0.5276595744680851,
"acc_norm_stderr": 0.03263597118409769
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.37719298245614036,
"acc_stderr": 0.04559522141958216,
"acc_norm": 0.37719298245614036,
"acc_norm_stderr": 0.04559522141958216
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.5517241379310345,
"acc_stderr": 0.04144311810878152,
"acc_norm": 0.5517241379310345,
"acc_norm_stderr": 0.04144311810878152
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.35978835978835977,
"acc_stderr": 0.024718075944129277,
"acc_norm": 0.35978835978835977,
"acc_norm_stderr": 0.024718075944129277
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3968253968253968,
"acc_stderr": 0.043758884927270605,
"acc_norm": 0.3968253968253968,
"acc_norm_stderr": 0.043758884927270605
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.37,
"acc_stderr": 0.04852365870939099,
"acc_norm": 0.37,
"acc_norm_stderr": 0.04852365870939099
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.7258064516129032,
"acc_stderr": 0.0253781399708852,
"acc_norm": 0.7258064516129032,
"acc_norm_stderr": 0.0253781399708852
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.5369458128078818,
"acc_stderr": 0.035083705204426656,
"acc_norm": 0.5369458128078818,
"acc_norm_stderr": 0.035083705204426656
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.62,
"acc_stderr": 0.04878317312145632,
"acc_norm": 0.62,
"acc_norm_stderr": 0.04878317312145632
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.7515151515151515,
"acc_stderr": 0.033744026441394036,
"acc_norm": 0.7515151515151515,
"acc_norm_stderr": 0.033744026441394036
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.7474747474747475,
"acc_stderr": 0.030954055470365886,
"acc_norm": 0.7474747474747475,
"acc_norm_stderr": 0.030954055470365886
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8238341968911918,
"acc_stderr": 0.027493504244548057,
"acc_norm": 0.8238341968911918,
"acc_norm_stderr": 0.027493504244548057
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.6128205128205129,
"acc_stderr": 0.02469721693087894,
"acc_norm": 0.6128205128205129,
"acc_norm_stderr": 0.02469721693087894
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.34074074074074073,
"acc_stderr": 0.02889774874113115,
"acc_norm": 0.34074074074074073,
"acc_norm_stderr": 0.02889774874113115
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6722689075630253,
"acc_stderr": 0.03048991141767323,
"acc_norm": 0.6722689075630253,
"acc_norm_stderr": 0.03048991141767323
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.3576158940397351,
"acc_stderr": 0.03913453431177258,
"acc_norm": 0.3576158940397351,
"acc_norm_stderr": 0.03913453431177258
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.8,
"acc_stderr": 0.01714985851425095,
"acc_norm": 0.8,
"acc_norm_stderr": 0.01714985851425095
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.5231481481481481,
"acc_stderr": 0.034063153607115086,
"acc_norm": 0.5231481481481481,
"acc_norm_stderr": 0.034063153607115086
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7647058823529411,
"acc_stderr": 0.029771775228145635,
"acc_norm": 0.7647058823529411,
"acc_norm_stderr": 0.029771775228145635
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7257383966244726,
"acc_stderr": 0.029041333510598025,
"acc_norm": 0.7257383966244726,
"acc_norm_stderr": 0.029041333510598025
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6233183856502242,
"acc_stderr": 0.032521134899291884,
"acc_norm": 0.6233183856502242,
"acc_norm_stderr": 0.032521134899291884
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.7480916030534351,
"acc_stderr": 0.03807387116306086,
"acc_norm": 0.7480916030534351,
"acc_norm_stderr": 0.03807387116306086
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.7355371900826446,
"acc_stderr": 0.04026187527591205,
"acc_norm": 0.7355371900826446,
"acc_norm_stderr": 0.04026187527591205
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.7484662576687117,
"acc_stderr": 0.03408997886857529,
"acc_norm": 0.7484662576687117,
"acc_norm_stderr": 0.03408997886857529
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.3482142857142857,
"acc_stderr": 0.045218299028335865,
"acc_norm": 0.3482142857142857,
"acc_norm_stderr": 0.045218299028335865
},
"harness|hendrycksTest-management|5": {
"acc": 0.7669902912621359,
"acc_stderr": 0.04185832598928315,
"acc_norm": 0.7669902912621359,
"acc_norm_stderr": 0.04185832598928315
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8675213675213675,
"acc_stderr": 0.022209309073165616,
"acc_norm": 0.8675213675213675,
"acc_norm_stderr": 0.022209309073165616
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.71,
"acc_stderr": 0.045604802157206845,
"acc_norm": 0.71,
"acc_norm_stderr": 0.045604802157206845
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.789272030651341,
"acc_stderr": 0.014583812465862541,
"acc_norm": 0.789272030651341,
"acc_norm_stderr": 0.014583812465862541
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.6734104046242775,
"acc_stderr": 0.025248264774242832,
"acc_norm": 0.6734104046242775,
"acc_norm_stderr": 0.025248264774242832
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.17094972067039105,
"acc_stderr": 0.012590873868789234,
"acc_norm": 0.17094972067039105,
"acc_norm_stderr": 0.012590873868789234
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6633986928104575,
"acc_stderr": 0.02705797462449438,
"acc_norm": 0.6633986928104575,
"acc_norm_stderr": 0.02705797462449438
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6977491961414791,
"acc_stderr": 0.026082700695399662,
"acc_norm": 0.6977491961414791,
"acc_norm_stderr": 0.026082700695399662
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6728395061728395,
"acc_stderr": 0.026105673861409828,
"acc_norm": 0.6728395061728395,
"acc_norm_stderr": 0.026105673861409828
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.44680851063829785,
"acc_stderr": 0.02965823509766691,
"acc_norm": 0.44680851063829785,
"acc_norm_stderr": 0.02965823509766691
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42633637548891784,
"acc_stderr": 0.012630884771599698,
"acc_norm": 0.42633637548891784,
"acc_norm_stderr": 0.012630884771599698
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.6286764705882353,
"acc_stderr": 0.02934980313976587,
"acc_norm": 0.6286764705882353,
"acc_norm_stderr": 0.02934980313976587
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.6176470588235294,
"acc_stderr": 0.019659922493623343,
"acc_norm": 0.6176470588235294,
"acc_norm_stderr": 0.019659922493623343
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6545454545454545,
"acc_stderr": 0.04554619617541054,
"acc_norm": 0.6545454545454545,
"acc_norm_stderr": 0.04554619617541054
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6653061224489796,
"acc_stderr": 0.030209235226242307,
"acc_norm": 0.6653061224489796,
"acc_norm_stderr": 0.030209235226242307
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.8208955223880597,
"acc_stderr": 0.027113286753111837,
"acc_norm": 0.8208955223880597,
"acc_norm_stderr": 0.027113286753111837
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.79,
"acc_stderr": 0.040936018074033256,
"acc_norm": 0.79,
"acc_norm_stderr": 0.040936018074033256
},
"harness|hendrycksTest-virology|5": {
"acc": 0.5240963855421686,
"acc_stderr": 0.03887971849597264,
"acc_norm": 0.5240963855421686,
"acc_norm_stderr": 0.03887971849597264
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.8304093567251462,
"acc_stderr": 0.02878210810540171,
"acc_norm": 0.8304093567251462,
"acc_norm_stderr": 0.02878210810540171
},
"harness|truthfulqa:mc|0": {
"mc1": 0.379436964504284,
"mc1_stderr": 0.01698703926614298,
"mc2": 0.5280717894644429,
"mc2_stderr": 0.015316530809563272
},
"harness|winogrande|5": {
"acc": 0.7695343330702447,
"acc_stderr": 0.011835872164836671
},
"harness|gsm8k|5": {
"acc": 0.3252463987869598,
"acc_stderr": 0.01290390475254392
}
}
```
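
The same aggregated numbers are stored as a plain JSON file at the root of the repository (the file linked above), so they can also be fetched without loading any of the per-task configurations. Below is a minimal sketch using `huggingface_hub`; the exact nesting of the raw file is an assumption, so the snippet reads the summary defensively.

```python
import json

from huggingface_hub import hf_hub_download

# Fetch the aggregated results file linked above (repo_type must be "dataset").
path = hf_hub_download(
    repo_id="open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged",
    filename="results_2023-12-22T01-34-26.452654.json",
    repo_type="dataset",
)

with open(path) as f:
    raw = json.load(f)

# The block shown in this card is assumed to live under a top-level "results" key;
# fall back to the file itself if that key is absent.
results = raw.get("results", raw)
print(results["all"])
```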
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged | [
"region:us"
] | 2023-12-22T01:36:43+00:00 | {"pretty_name": "Evaluation run of MexIvanov/zephyr-python-ru-merged", "dataset_summary": "Dataset automatically created during the evaluation run of model [MexIvanov/zephyr-python-ru-merged](https://huggingface.co/MexIvanov/zephyr-python-ru-merged) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-22T01:34:26.452654](https://huggingface.co/datasets/open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged/blob/main/results_2023-12-22T01-34-26.452654.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5993966446508577,\n \"acc_stderr\": 0.0330766584101115,\n \"acc_norm\": 0.6050500523708532,\n \"acc_norm_stderr\": 0.033760089456490616,\n \"mc1\": 0.379436964504284,\n \"mc1_stderr\": 0.01698703926614298,\n \"mc2\": 0.5280717894644429,\n \"mc2_stderr\": 0.015316530809563272\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.5290102389078498,\n \"acc_stderr\": 0.014586776355294314,\n \"acc_norm\": 0.560580204778157,\n \"acc_norm_stderr\": 0.014503747823580122\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.622585142401912,\n \"acc_stderr\": 0.004837493439874301,\n \"acc_norm\": 0.8205536745668194,\n \"acc_norm_stderr\": 0.003829413805113985\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.29,\n \"acc_stderr\": 0.04560480215720684,\n \"acc_norm\": 0.29,\n \"acc_norm_stderr\": 0.04560480215720684\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.5777777777777777,\n \"acc_stderr\": 0.04266763404099582,\n \"acc_norm\": 0.5777777777777777,\n \"acc_norm_stderr\": 0.04266763404099582\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5986842105263158,\n \"acc_stderr\": 0.03988903703336284,\n \"acc_norm\": 0.5986842105263158,\n \"acc_norm_stderr\": 0.03988903703336284\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.49,\n \"acc_stderr\": 0.05024183937956912,\n \"acc_norm\": 0.49,\n \"acc_norm_stderr\": 0.05024183937956912\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6716981132075471,\n \"acc_stderr\": 0.02890159361241178,\n \"acc_norm\": 0.6716981132075471,\n \"acc_norm_stderr\": 0.02890159361241178\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.6875,\n \"acc_stderr\": 0.038760854559127644,\n \"acc_norm\": 0.6875,\n \"acc_norm_stderr\": 0.038760854559127644\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 
0.050161355804659205,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.050161355804659205\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.47,\n \"acc_stderr\": 0.05016135580465919,\n \"acc_norm\": 0.47,\n \"acc_norm_stderr\": 0.05016135580465919\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.38,\n \"acc_stderr\": 0.04878317312145633,\n \"acc_norm\": 0.38,\n \"acc_norm_stderr\": 0.04878317312145633\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.6127167630057804,\n \"acc_stderr\": 0.03714325906302065,\n \"acc_norm\": 0.6127167630057804,\n \"acc_norm_stderr\": 0.03714325906302065\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.39215686274509803,\n \"acc_stderr\": 0.048580835742663454,\n \"acc_norm\": 0.39215686274509803,\n \"acc_norm_stderr\": 0.048580835742663454\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.72,\n \"acc_stderr\": 0.04512608598542128,\n \"acc_norm\": 0.72,\n \"acc_norm_stderr\": 0.04512608598542128\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.5276595744680851,\n \"acc_stderr\": 0.03263597118409769,\n \"acc_norm\": 0.5276595744680851,\n \"acc_norm_stderr\": 0.03263597118409769\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.37719298245614036,\n \"acc_stderr\": 0.04559522141958216,\n \"acc_norm\": 0.37719298245614036,\n \"acc_norm_stderr\": 0.04559522141958216\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.5517241379310345,\n \"acc_stderr\": 0.04144311810878152,\n \"acc_norm\": 0.5517241379310345,\n \"acc_norm_stderr\": 0.04144311810878152\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.35978835978835977,\n \"acc_stderr\": 0.024718075944129277,\n \"acc_norm\": 0.35978835978835977,\n \"acc_norm_stderr\": 0.024718075944129277\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3968253968253968,\n \"acc_stderr\": 0.043758884927270605,\n \"acc_norm\": 0.3968253968253968,\n \"acc_norm_stderr\": 0.043758884927270605\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.37,\n \"acc_stderr\": 0.04852365870939099,\n \"acc_norm\": 0.37,\n \"acc_norm_stderr\": 0.04852365870939099\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.7258064516129032,\n \"acc_stderr\": 0.0253781399708852,\n \"acc_norm\": 0.7258064516129032,\n \"acc_norm_stderr\": 0.0253781399708852\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.5369458128078818,\n \"acc_stderr\": 0.035083705204426656,\n \"acc_norm\": 0.5369458128078818,\n \"acc_norm_stderr\": 0.035083705204426656\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.62,\n \"acc_stderr\": 0.04878317312145632,\n \"acc_norm\": 0.62,\n \"acc_norm_stderr\": 0.04878317312145632\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.7515151515151515,\n \"acc_stderr\": 0.033744026441394036,\n \"acc_norm\": 0.7515151515151515,\n \"acc_norm_stderr\": 0.033744026441394036\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.7474747474747475,\n \"acc_stderr\": 0.030954055470365886,\n \"acc_norm\": 0.7474747474747475,\n \"acc_norm_stderr\": 0.030954055470365886\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8238341968911918,\n \"acc_stderr\": 0.027493504244548057,\n \"acc_norm\": 0.8238341968911918,\n \"acc_norm_stderr\": 0.027493504244548057\n },\n 
\"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.6128205128205129,\n \"acc_stderr\": 0.02469721693087894,\n \"acc_norm\": 0.6128205128205129,\n \"acc_norm_stderr\": 0.02469721693087894\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.34074074074074073,\n \"acc_stderr\": 0.02889774874113115,\n \"acc_norm\": 0.34074074074074073,\n \"acc_norm_stderr\": 0.02889774874113115\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6722689075630253,\n \"acc_stderr\": 0.03048991141767323,\n \"acc_norm\": 0.6722689075630253,\n \"acc_norm_stderr\": 0.03048991141767323\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.3576158940397351,\n \"acc_stderr\": 0.03913453431177258,\n \"acc_norm\": 0.3576158940397351,\n \"acc_norm_stderr\": 0.03913453431177258\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.8,\n \"acc_stderr\": 0.01714985851425095,\n \"acc_norm\": 0.8,\n \"acc_norm_stderr\": 0.01714985851425095\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.5231481481481481,\n \"acc_stderr\": 0.034063153607115086,\n \"acc_norm\": 0.5231481481481481,\n \"acc_norm_stderr\": 0.034063153607115086\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7647058823529411,\n \"acc_stderr\": 0.029771775228145635,\n \"acc_norm\": 0.7647058823529411,\n \"acc_norm_stderr\": 0.029771775228145635\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7257383966244726,\n \"acc_stderr\": 0.029041333510598025,\n \"acc_norm\": 0.7257383966244726,\n \"acc_norm_stderr\": 0.029041333510598025\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6233183856502242,\n \"acc_stderr\": 0.032521134899291884,\n \"acc_norm\": 0.6233183856502242,\n \"acc_norm_stderr\": 0.032521134899291884\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.7480916030534351,\n \"acc_stderr\": 0.03807387116306086,\n \"acc_norm\": 0.7480916030534351,\n \"acc_norm_stderr\": 0.03807387116306086\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.7355371900826446,\n \"acc_stderr\": 0.04026187527591205,\n \"acc_norm\": 0.7355371900826446,\n \"acc_norm_stderr\": 0.04026187527591205\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.7484662576687117,\n \"acc_stderr\": 0.03408997886857529,\n \"acc_norm\": 0.7484662576687117,\n \"acc_norm_stderr\": 0.03408997886857529\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.3482142857142857,\n \"acc_stderr\": 0.045218299028335865,\n \"acc_norm\": 0.3482142857142857,\n \"acc_norm_stderr\": 0.045218299028335865\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7669902912621359,\n \"acc_stderr\": 0.04185832598928315,\n \"acc_norm\": 0.7669902912621359,\n \"acc_norm_stderr\": 0.04185832598928315\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8675213675213675,\n \"acc_stderr\": 0.022209309073165616,\n \"acc_norm\": 0.8675213675213675,\n \"acc_norm_stderr\": 0.022209309073165616\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.71,\n \"acc_stderr\": 0.045604802157206845,\n \"acc_norm\": 0.71,\n \"acc_norm_stderr\": 0.045604802157206845\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n \"acc\": 0.789272030651341,\n \"acc_stderr\": 0.014583812465862541,\n 
\"acc_norm\": 0.789272030651341,\n \"acc_norm_stderr\": 0.014583812465862541\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.6734104046242775,\n \"acc_stderr\": 0.025248264774242832,\n \"acc_norm\": 0.6734104046242775,\n \"acc_norm_stderr\": 0.025248264774242832\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.17094972067039105,\n \"acc_stderr\": 0.012590873868789234,\n \"acc_norm\": 0.17094972067039105,\n \"acc_norm_stderr\": 0.012590873868789234\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6633986928104575,\n \"acc_stderr\": 0.02705797462449438,\n \"acc_norm\": 0.6633986928104575,\n \"acc_norm_stderr\": 0.02705797462449438\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6977491961414791,\n \"acc_stderr\": 0.026082700695399662,\n \"acc_norm\": 0.6977491961414791,\n \"acc_norm_stderr\": 0.026082700695399662\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6728395061728395,\n \"acc_stderr\": 0.026105673861409828,\n \"acc_norm\": 0.6728395061728395,\n \"acc_norm_stderr\": 0.026105673861409828\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.44680851063829785,\n \"acc_stderr\": 0.02965823509766691,\n \"acc_norm\": 0.44680851063829785,\n \"acc_norm_stderr\": 0.02965823509766691\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42633637548891784,\n \"acc_stderr\": 0.012630884771599698,\n \"acc_norm\": 0.42633637548891784,\n \"acc_norm_stderr\": 0.012630884771599698\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.6286764705882353,\n \"acc_stderr\": 0.02934980313976587,\n \"acc_norm\": 0.6286764705882353,\n \"acc_norm_stderr\": 0.02934980313976587\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.6176470588235294,\n \"acc_stderr\": 0.019659922493623343,\n \"acc_norm\": 0.6176470588235294,\n \"acc_norm_stderr\": 0.019659922493623343\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6545454545454545,\n \"acc_stderr\": 0.04554619617541054,\n \"acc_norm\": 0.6545454545454545,\n \"acc_norm_stderr\": 0.04554619617541054\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6653061224489796,\n \"acc_stderr\": 0.030209235226242307,\n \"acc_norm\": 0.6653061224489796,\n \"acc_norm_stderr\": 0.030209235226242307\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.8208955223880597,\n \"acc_stderr\": 0.027113286753111837,\n \"acc_norm\": 0.8208955223880597,\n \"acc_norm_stderr\": 0.027113286753111837\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.79,\n \"acc_stderr\": 0.040936018074033256,\n \"acc_norm\": 0.79,\n \"acc_norm_stderr\": 0.040936018074033256\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.5240963855421686,\n \"acc_stderr\": 0.03887971849597264,\n \"acc_norm\": 0.5240963855421686,\n \"acc_norm_stderr\": 0.03887971849597264\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.8304093567251462,\n \"acc_stderr\": 0.02878210810540171,\n \"acc_norm\": 0.8304093567251462,\n \"acc_norm_stderr\": 0.02878210810540171\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.379436964504284,\n \"mc1_stderr\": 0.01698703926614298,\n \"mc2\": 0.5280717894644429,\n \"mc2_stderr\": 0.015316530809563272\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7695343330702447,\n \"acc_stderr\": 0.011835872164836671\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.3252463987869598,\n \"acc_stderr\": 0.01290390475254392\n }\n}\n```", "repo_url": 
"https://huggingface.co/MexIvanov/zephyr-python-ru-merged", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|arc:challenge|25_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|gsm8k|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hellaswag|10_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T01-34-26.452654.parquet", 
"**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T01-34-26.452654.parquet", 
"**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-22T01-34-26.452654.parquet", 
"**/details_harness|hendrycksTest-prehistory|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-22T01-34-26.452654.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T01-34-26.452654.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": 
"harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T01-34-26.452654.parquet"]}]}, 
{"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["**/details_harness|winogrande|5_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-22T01-34-26.452654.parquet"]}]}, {"config_name": "results", "data_files": [{"split": "2023_12_22T01_34_26.452654", "path": ["results_2023-12-22T01-34-26.452654.parquet"]}, {"split": "latest", "path": 
["results_2023-12-22T01-34-26.452654.parquet"]}]}]} | 2023-12-22T01:37:10+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Evaluation run of MexIvanov/zephyr-python-ru-merged
Dataset automatically created during the evaluation run of model MexIvanov/zephyr-python-ru-merged on the Open LLM Leaderboard.
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).
To load the details from a run, you can for instance do the following:
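(A minimal sketch; the repository path below is assumed from the standard `open-llm-leaderboard/details_<org>__<model>` naming convention rather than stated in this card, and `harness_winogrande_5` is one of the configurations listed in this card's metadata.)

```python
from datasets import load_dataset

# Load the per-sample details for one evaluated task (5-shot Winogrande).
# Per this card's metadata, each configuration exposes a "latest" split
# that points to the most recent evaluation run.
data = load_dataset(
    "open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged",  # assumed repo path
    "harness_winogrande_5",
    split="latest",
)
```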
## Latest results
These are the latest results from run 2023-12-22T01:34:26.452654 (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
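The aggregated metrics themselves live in the "results" configuration, whose "latest" split points to the most recent run; a minimal sketch for pulling them (same assumed repository path as above):

```python
from datasets import load_dataset

# The "results" configuration aggregates all metrics of the run;
# its "latest" split tracks the most recent evaluation.
results = load_dataset(
    "open-llm-leaderboard/details_MexIvanov__zephyr-python-ru-merged",  # assumed repo path
    "results",
    split="latest",
)
```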
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Evaluation run of MexIvanov/zephyr-python-ru-merged\n\n\n\nDataset automatically created during the evaluation run of model MexIvanov/zephyr-python-ru-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-22T01:34:26.452654(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Evaluation run of MexIvanov/zephyr-python-ru-merged\n\n\n\nDataset automatically created during the evaluation run of model MexIvanov/zephyr-python-ru-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:",
"## Latest results\n\nThese are the latest results from run 2023-12-22T01:34:26.452654(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
191,
67,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of MexIvanov/zephyr-python-ru-merged\n\n\n\nDataset automatically created during the evaluation run of model MexIvanov/zephyr-python-ru-merged on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-22T01:34:26.452654(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]"
] |
5e76172e4ff7c883fe60fb76d19966987f4ccb9d | # Dataset Card for Only Clean Data (OCD)
If you are training base language models and want the cleanest sources available, OCD was built just for you.
## Dataset Details
### Dataset Description
It is without question that the quality of a language model rests on the quality of its training data. OCD is a meticulously curated and cleaned corpus of text documents, ensuring the highest quality text from a variety of sources. Part of this process includes manually inspecting (and sometimes manually fixing) thousands of documents.
Whenever problem documents are found (e.g., from conversion errors or spam that got through), they are fixed for the next release.
### Dataset Sources
OCD currently consists of 3 subsets:
- Web data originating from [C4](https://huggingface.co/datasets/c4). This subset was heavily filtered to remove a lot of spam, templates, and other low quality data. It consists of approximately 18M documents, which is roughly 5% of the original 365M documents.
- Peer reviewed research originating from [PLOS](https://plos.org/text-and-data-mining/). Documents were normalized to markdown from the original JATS XML, and were processed to remove captions and references to missing figures. A large number of these documents were manually inspected to remove irrelevant files (e.g. journal announcements, letters to the editor, and short comments).
- Non-fiction books from [CCOpenBooks](https://huggingface.co/datasets/Daniel-P-Gonzalez/CCOpenBooks). This will be expanded soon, as the original collection was quite small. However, it contains only high quality text books, all of which have non-restrictive cc-by compatible licenses.
This dataset is actively in development, and will continue to be extended to include books, research, and other documents from various domains, code, documentation, and more.
## License
OCD is released under the cc-by-4.0 license. Note that this is directly compatible with [PLOS](https://plos.org/terms-of-use/) and CCOpenBooks ([cc-by-4.0](https://huggingface.co/datasets/Daniel-P-Gonzalez/CCOpenBooks)).
Documents originating from C4 were released under the ODC-BY license as noted [here](https://huggingface.co/datasets/c4#licensing-information). As this subset was derived from an enormous Common Crawl corpus, there is a possibility that it contains documents not compatible with this license. However, the heavy filtering applied as part of the OCD project greatly reduces this possibility. Additionally, any opt-out requests from content authors will be respected.
## Uses
The primary intended use of this dataset is for training base language models.
| Daniel-P-Gonzalez/OCD | [
"task_categories:text-generation",
"size_categories:10B<n<100B",
"license:cc-by-4.0",
"region:us"
] | 2023-12-22T02:44:51+00:00 | {"license": "cc-by-4.0", "size_categories": ["10B<n<100B"], "task_categories": ["text-generation"], "pretty_name": "Only Clean Data (OCD)"} | 2023-12-23T16:49:10+00:00 | [] | [] | TAGS
#task_categories-text-generation #size_categories-10B<n<100B #license-cc-by-4.0 #region-us
| # Dataset Card for Only Clean Data (OCD)
If you are training base language models and want the cleanest sources available, OCD was built just for you.
## Dataset Details
### Dataset Description
It is without question that the quality of a language model rests on the quality of its training data. OCD is a meticulously curated and cleaned corpus of text documents, ensuring the highest quality text from a variety of sources. Part of this process includes manually inspecting (and sometimes manually fixing) thousands of documents.
Whenever problem documents are found (from e.g. conversion errors, or spam that got through), they are fixed for the next release.
### Dataset Sources
OCD currently consists of 3 subsets:
- Web data originating from C4. This subset was heavily filtered to remove a lot of spam, templates, and other low quality data. It consists of approximately 18M documents, which is roughly 5% of the original 365M documents.
- Peer reviewed research originating from PLOS. Documents were normalized to markdown from the original JATS XML, and were processed to remove captions and references to missing figures. A large number of these documents were manually inspected to remove irrelevant files (e.g. journal announcements, letters to the editor, and short comments).
- Non-fiction books from CCOpenBooks. This will be expanded soon, as the original collection was quite small. However, it contains only high quality text books, all of which have non-restrictive cc-by compatible licenses.
This dataset is actively in development, and will continue to be extended to include books, research, and other documents from various domains, code, documentation, and more.
## License
OCD is released under the cc-by-4.0 license. Note that this is directly compatible with PLOS and CCOpenBooks (cc-by-4.0).
Documents originating from C4 were released under the ODC-BY license as noted here. As this subset was derived from an enormous Common Crawl corpus, there is a possibility that it contains documents not compatible with this license. However, the heavy filtering applied as part of the OCD project greatly reduces this possibility. Additionally, any opt-out requests from content authors will be respected.
## Uses
The primary intended use of this dataset is for training base language models.
| [
"# Dataset Card for Only Clean Data (OCD)\n\nIf you are training base language models and want the cleanest sources available, OCD was built just for you.",
"## Dataset Details",
"### Dataset Description\n\nIt is without question that the quality of a language model rests on the quality of its training data. OCD is a meticulously curated and cleaned corpus of text documents, ensuring the highest quality text from a variety of sources. Part of this process includes manually inspecting (and sometimes manually fixing) thousands of documents.\nWhenever problem documents are found (from e.g. conversion errors, or spam that got through), they are fixed for the next release.",
"### Dataset Sources\n\nOCD currently consists of 3 subsets:\n\n- Web data originating from C4. This subset was heavily filtered to remove a lot of spam, templates, and other low quality data. It consists of approximately 18M documents, which is roughly 5% of the original 365M documents.\n- Peer reviewed research originating from PLOS. Documents were normalized to markdown from the original JATS XML, and were processed to remove captions and references to missing figures. A large number of these documents were manually inspected to remove irrelevant files (e.g. journal announcements, letters to the editor, and short comments).\n- Non-fiction books from CCOpenBooks. This will be expanded soon, as the original collection was quite small. However, it contains only high quality text books, all of which have non-restrictive cc-by compatible licenses.\n\nThis dataset is actively in development, and will continue to be extended to include books, research, and other documents from various domains, code, documentation, and more.",
"## License\n\nOCD is released under the cc-by-4.0 license. Note that this is directly compatible with PLOS and CCOpenBooks (cc-by-4.0).\n\nDocuments originating from C4 were released under the ODC-BY license as noted here. As this subset was derived from an enormous Common Crawl corpus, there is a possibility that it contains documents not compatible with this license. However, the heavy filtering applied as part of the OCD project greatly reduces this possibility. Additionally, any opt-out requests from content authors will be respected.",
"## Uses\n\nThe primary intended use of this dataset is for training base language models."
] | [
"TAGS\n#task_categories-text-generation #size_categories-10B<n<100B #license-cc-by-4.0 #region-us \n",
"# Dataset Card for Only Clean Data (OCD)\n\nIf you are training base language models and want the cleanest sources available, OCD was built just for you.",
"## Dataset Details",
"### Dataset Description\n\nIt is without question that the quality of a language model rests on the quality of its training data. OCD is a meticulously curated and cleaned corpus of text documents, ensuring the highest quality text from a variety of sources. Part of this process includes manually inspecting (and sometimes manually fixing) thousands of documents.\nWhenever problem documents are found (from e.g. conversion errors, or spam that got through), they are fixed for the next release.",
"### Dataset Sources\n\nOCD currently consists of 3 subsets:\n\n- Web data originating from C4. This subset was heavily filtered to remove a lot of spam, templates, and other low quality data. It consists of approximately 18M documents, which is roughly 5% of the original 365M documents.\n- Peer reviewed research originating from PLOS. Documents were normalized to markdown from the original JATS XML, and were processed to remove captions and references to missing figures. A large number of these documents were manually inspected to remove irrelevant files (e.g. journal announcements, letters to the editor, and short comments).\n- Non-fiction books from CCOpenBooks. This will be expanded soon, as the original collection was quite small. However, it contains only high quality text books, all of which have non-restrictive cc-by compatible licenses.\n\nThis dataset is actively in development, and will continue to be extended to include books, research, and other documents from various domains, code, documentation, and more.",
"## License\n\nOCD is released under the cc-by-4.0 license. Note that this is directly compatible with PLOS and CCOpenBooks (cc-by-4.0).\n\nDocuments originating from C4 were released under the ODC-BY license as noted here. As this subset was derived from an enormous Common Crawl corpus, there is a possibility that it contains documents not compatible with this license. However, the heavy filtering applied as part of the OCD project greatly reduces this possibility. Additionally, any opt-out requests from content authors will be respected.",
"## Uses\n\nThe primary intended use of this dataset is for training base language models."
] | [
38,
35,
4,
107,
242,
130,
18
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-10B<n<100B #license-cc-by-4.0 #region-us \n# Dataset Card for Only Clean Data (OCD)\n\nIf you are training base language models and want the cleanest sources available, OCD was built just for you.## Dataset Details### Dataset Description\n\nIt is without question that the quality of a language model rests on the quality of its training data. OCD is a meticulously curated and cleaned corpus of text documents, ensuring the highest quality text from a variety of sources. Part of this process includes manually inspecting (and sometimes manually fixing) thousands of documents.\nWhenever problem documents are found (from e.g. conversion errors, or spam that got through), they are fixed for the next release.### Dataset Sources\n\nOCD currently consists of 3 subsets:\n\n- Web data originating from C4. This subset was heavily filtered to remove a lot of spam, templates, and other low quality data. It consists of approximately 18M documents, which is roughly 5% of the original 365M documents.\n- Peer reviewed research originating from PLOS. Documents were normalized to markdown from the original JATS XML, and were processed to remove captions and references to missing figures. A large number of these documents were manually inspected to remove irrelevant files (e.g. journal announcements, letters to the editor, and short comments).\n- Non-fiction books from CCOpenBooks. This will be expanded soon, as the original collection was quite small. However, it contains only high quality text books, all of which have non-restrictive cc-by compatible licenses.\n\nThis dataset is actively in development, and will continue to be extended to include books, research, and other documents from various domains, code, documentation, and more."
] |
7019d953c2e46ad126efa8794bc61e82a43b5300 |
## Description
Very cool ai channel for funny stuff
## Model
SVD
## LoRA
veryVANYA/ps1-graphics-sdxl-v2
## Tags
- Music
- Gaming
- Comedy
## Voice
Chill funny male voice
## Music
funny music
## Prompt
Make a funny video (will be used by AI only) | DddDistorted/ai-tube-DddDistorted | [
"license:cc-by-nc-sa-4.0",
"region:us"
] | 2023-12-22T02:45:09+00:00 | {"license": "cc-by-nc-sa-4.0", "pretty_name": "DddDistorted"} | 2023-12-22T02:50:21+00:00 | [] | [] | TAGS
#license-cc-by-nc-sa-4.0 #region-us
|
## Description
Very cool ai channel for funny stuff
## Model
SVD
## LoRA
veryVANYA/ps1-graphics-sdxl-v2
## Tags
- Music
- Gaming
- Comedy
## Voice
Chill funny male voice
## Music
funny music
## Prompt
Make a funny video (will be used by AI only) | [
"## Description\n\nVery cool ai channel for funny stuff",
"## Model\n\nSVD",
"## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2",
"## Tags\n\n- Music\n- Gaming\n- Comedy",
"## Voice\n\nChill funny male voice",
"## Music\n\nfunny music",
"## Prompt\n\nMake a funny video (will be used by AI only)"
] | [
"TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n",
"## Description\n\nVery cool ai channel for funny stuff",
"## Model\n\nSVD",
"## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2",
"## Tags\n\n- Music\n- Gaming\n- Comedy",
"## Voice\n\nChill funny male voice",
"## Music\n\nfunny music",
"## Prompt\n\nMake a funny video (will be used by AI only)"
] | [
19,
9,
4,
18,
8,
6,
4,
16
] | [
"passage: TAGS\n#license-cc-by-nc-sa-4.0 #region-us \n## Description\n\nVery cool ai channel for funny stuff## Model\n\nSVD## LoRA\n\nveryVANYA/ps1-graphics-sdxl-v2## Tags\n\n- Music\n- Gaming\n- Comedy## Voice\n\nChill funny male voice## Music\n\nfunny music## Prompt\n\nMake a funny video (will be used by AI only)"
] |
cc4bd7ce63154f0a1a18e0d34aa1a3915eefe1e6 | # Dataset Card for "Geo170K"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Luckyjhg/Geo170K | [
"region:us"
] | 2023-12-22T02:49:59+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "qa_tuning", "path": "data/qa_tuning-*"}, {"split": "alignment", "path": "data/alignment-*"}]}], "dataset_info": {"features": [{"name": "image", "dtype": "string"}, {"name": "conversations", "list": [{"name": "from", "dtype": "string"}, {"name": "value", "dtype": "string"}]}], "splits": [{"name": "qa_tuning", "num_bytes": 93111889, "num_examples": 117205}, {"name": "alignment", "num_bytes": 20241610, "num_examples": 60252}], "download_size": 23754996, "dataset_size": 113353499}} | 2023-12-22T03:02:25+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "Geo170K"
More Information needed | [
"# Dataset Card for \"Geo170K\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"Geo170K\"\n\nMore Information needed"
] | [
6,
14
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"Geo170K\"\n\nMore Information needed"
] |
e897e55551297553ffb6c4cf5840dda451ea4101 | # Dataset Card for "m4singer_synth"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | Codec-SUPERB/m4singer_synth | [
"region:us"
] | 2023-12-22T03:56:17+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "original", "path": "data/original-*"}, {"split": "academicodec_hifi_16k_320d", "path": "data/academicodec_hifi_16k_320d-*"}, {"split": "academicodec_hifi_16k_320d_large_uni", "path": "data/academicodec_hifi_16k_320d_large_uni-*"}, {"split": "academicodec_hifi_24k_320d", "path": "data/academicodec_hifi_24k_320d-*"}, {"split": "audiodec_24k_320d", "path": "data/audiodec_24k_320d-*"}, {"split": "dac_16k", "path": "data/dac_16k-*"}, {"split": "dac_24k", "path": "data/dac_24k-*"}, {"split": "dac_44k", "path": "data/dac_44k-*"}, {"split": "encodec_24k_12bps", "path": "data/encodec_24k_12bps-*"}, {"split": "encodec_24k_1_5bps", "path": "data/encodec_24k_1_5bps-*"}, {"split": "encodec_24k_24bps", "path": "data/encodec_24k_24bps-*"}, {"split": "encodec_24k_3bps", "path": "data/encodec_24k_3bps-*"}, {"split": "encodec_24k_6bps", "path": "data/encodec_24k_6bps-*"}, {"split": "funcodec_en_libritts_16k_gr1nq32ds320", "path": "data/funcodec_en_libritts_16k_gr1nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_gr8nq32ds320", "path": "data/funcodec_en_libritts_16k_gr8nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds320", "path": "data/funcodec_en_libritts_16k_nq32ds320-*"}, {"split": "funcodec_en_libritts_16k_nq32ds640", "path": "data/funcodec_en_libritts_16k_nq32ds640-*"}, {"split": "funcodec_zh_en_16k_nq32ds320", "path": "data/funcodec_zh_en_16k_nq32ds320-*"}, {"split": "funcodec_zh_en_16k_nq32ds640", "path": "data/funcodec_zh_en_16k_nq32ds640-*"}, {"split": "speech_tokenizer_16k", "path": "data/speech_tokenizer_16k-*"}]}], "dataset_info": {"features": [{"name": "audio", "dtype": {"audio": {"sampling_rate": 16000}}}, {"name": "id", "dtype": "string"}], "splits": [{"name": "original", "num_bytes": 40151638.0, "num_examples": 217}, {"name": "academicodec_hifi_16k_320d", "num_bytes": 40096637.0, "num_examples": 217}, {"name": "academicodec_hifi_16k_320d_large_uni", "num_bytes": 40096637.0, "num_examples": 217}, {"name": "academicodec_hifi_24k_320d", "num_bytes": 60154877.0, "num_examples": 217}, {"name": "audiodec_24k_320d", "num_bytes": 60275237.0, "num_examples": 217}, {"name": "dac_16k", "num_bytes": 40151855.0, "num_examples": 217}, {"name": "dac_24k", "num_bytes": 60219467.0, "num_examples": 217}, {"name": "dac_44k", "num_bytes": 110639439.0, "num_examples": 217}, {"name": "encodec_24k_12bps", "num_bytes": 60219467.0, "num_examples": 217}, {"name": "encodec_24k_1_5bps", "num_bytes": 60219467.0, "num_examples": 217}, {"name": "encodec_24k_24bps", "num_bytes": 60219467.0, "num_examples": 217}, {"name": "encodec_24k_3bps", "num_bytes": 60219467.0, "num_examples": 217}, {"name": "encodec_24k_6bps", "num_bytes": 60219467.0, "num_examples": 217}, {"name": "funcodec_en_libritts_16k_gr1nq32ds320", "num_bytes": 40143099.0, "num_examples": 217}, {"name": "funcodec_en_libritts_16k_gr8nq32ds320", "num_bytes": 40143099.0, "num_examples": 217}, {"name": "funcodec_en_libritts_16k_nq32ds320", "num_bytes": 40151855.0, "num_examples": 217}, {"name": "funcodec_en_libritts_16k_nq32ds640", "num_bytes": 40151855.0, "num_examples": 217}, {"name": "funcodec_zh_en_16k_nq32ds320", "num_bytes": 40151855.0, "num_examples": 217}, {"name": "funcodec_zh_en_16k_nq32ds640", "num_bytes": 40151855.0, "num_examples": 217}, {"name": "speech_tokenizer_16k", "num_bytes": 40206077.0, "num_examples": 217}], "download_size": 1017913637, "dataset_size": 1033982817.0}} | 2024-02-02T08:11:27+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "m4singer_synth"
More Information needed | [
"# Dataset Card for \"m4singer_synth\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"m4singer_synth\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"m4singer_synth\"\n\nMore Information needed"
] |
c86346ab779194dfd65323c01bbce3d6ef75c730 | # Dataset Card for "openai_summarize_comparisons_relabel_pythia1b_iter1_temp0.7"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | arianhosseini/openai_summarize_comparisons_relabel_pythia1b_iter1_temp0.7 | [
"region:us"
] | 2023-12-22T03:58:21+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 35977664, "num_examples": 20000}], "download_size": 21784615, "dataset_size": 35977664}} | 2023-12-22T03:58:24+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "openai_summarize_comparisons_relabel_pythia1b_iter1_temp0.7"
More Information needed | [
"# Dataset Card for \"openai_summarize_comparisons_relabel_pythia1b_iter1_temp0.7\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"openai_summarize_comparisons_relabel_pythia1b_iter1_temp0.7\"\n\nMore Information needed"
] | [
6,
34
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"openai_summarize_comparisons_relabel_pythia1b_iter1_temp0.7\"\n\nMore Information needed"
] |
c87fa71781b7c1fe07c4e1af0c940aa89e36c245 | # Dataset Card for "synthia_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/synthia_binarized | [
"region:us"
] | 2023-12-22T04:00:07+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 250608607.43749684, "num_examples": 106957}, {"name": "test", "num_bytes": 27847483.562503155, "num_examples": 11885}], "download_size": 131510085, "dataset_size": 278456091.0}} | 2023-12-22T04:00:33+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "synthia_binarized"
More Information needed | [
"# Dataset Card for \"synthia_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"synthia_binarized\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"synthia_binarized\"\n\nMore Information needed"
] |
dd94f5cbe4259780a38663e82c4ccf4d0e4e2bab |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | Mahalingam/medsum | [
"region:us"
] | 2023-12-22T04:01:04+00:00 | {} | 2023-12-22T05:01:41+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
8ce222922d2063a9c6da2999f013fa5ae7aea744 |
# Dataset Card for Evaluation run of luffycodes/vicuna-class-shishya-all-hal-13b-ep3
<!-- Provide a quick summary of the dataset. -->
Dataset automatically created during the evaluation run of model [luffycodes/vicuna-class-shishya-all-hal-13b-ep3](https://huggingface.co/luffycodes/vicuna-class-shishya-all-hal-13b-ep3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).
The dataset is composed of 63 configurations, each one corresponding to one of the evaluated tasks.

The dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run. The "train" split always points to the latest results.

An additional configuration "results" stores all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).
To load the details from a run, you can for instance do the following:
```python
from datasets import load_dataset
data = load_dataset("open-llm-leaderboard/details_luffycodes__vicuna-class-shishya-all-hal-13b-ep3",
"harness_winogrande_5",
split="train")
```
## Latest results
These are the [latest results from run 2023-12-22T04:30:49.229801](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__vicuna-class-shishya-all-hal-13b-ep3/blob/main/results_2023-12-22T04-30-49.229801.json) (note that there might be results for other tasks in the repo if successive evals didn't cover the same tasks. You can find each in the results and the "latest" split for each eval):
```python
{
"all": {
"acc": 0.5604887588394379,
"acc_stderr": 0.033604319980563735,
"acc_norm": 0.5667605430192505,
"acc_norm_stderr": 0.03431410978810103,
"mc1": 0.2937576499388005,
"mc1_stderr": 0.015945068581236614,
"mc2": 0.4275151973594472,
"mc2_stderr": 0.015073406033459216
},
"harness|arc:challenge|25": {
"acc": 0.44368600682593856,
"acc_stderr": 0.014518421825670454,
"acc_norm": 0.4863481228668942,
"acc_norm_stderr": 0.014605943429860942
},
"harness|hellaswag|10": {
"acc": 0.6062537343158734,
"acc_stderr": 0.004875812021462007,
"acc_norm": 0.8028281218880701,
"acc_norm_stderr": 0.003970500832922039
},
"harness|hendrycksTest-abstract_algebra|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621502,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621502
},
"harness|hendrycksTest-anatomy|5": {
"acc": 0.4666666666666667,
"acc_stderr": 0.043097329010363554,
"acc_norm": 0.4666666666666667,
"acc_norm_stderr": 0.043097329010363554
},
"harness|hendrycksTest-astronomy|5": {
"acc": 0.5657894736842105,
"acc_stderr": 0.040335656678483205,
"acc_norm": 0.5657894736842105,
"acc_norm_stderr": 0.040335656678483205
},
"harness|hendrycksTest-business_ethics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-clinical_knowledge|5": {
"acc": 0.6226415094339622,
"acc_stderr": 0.02983280811479601,
"acc_norm": 0.6226415094339622,
"acc_norm_stderr": 0.02983280811479601
},
"harness|hendrycksTest-college_biology|5": {
"acc": 0.5763888888888888,
"acc_stderr": 0.04132125019723369,
"acc_norm": 0.5763888888888888,
"acc_norm_stderr": 0.04132125019723369
},
"harness|hendrycksTest-college_chemistry|5": {
"acc": 0.44,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.44,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-college_computer_science|5": {
"acc": 0.4,
"acc_stderr": 0.04923659639173309,
"acc_norm": 0.4,
"acc_norm_stderr": 0.04923659639173309
},
"harness|hendrycksTest-college_mathematics|5": {
"acc": 0.32,
"acc_stderr": 0.04688261722621505,
"acc_norm": 0.32,
"acc_norm_stderr": 0.04688261722621505
},
"harness|hendrycksTest-college_medicine|5": {
"acc": 0.5722543352601156,
"acc_stderr": 0.03772446857518027,
"acc_norm": 0.5722543352601156,
"acc_norm_stderr": 0.03772446857518027
},
"harness|hendrycksTest-college_physics|5": {
"acc": 0.3137254901960784,
"acc_stderr": 0.04617034827006717,
"acc_norm": 0.3137254901960784,
"acc_norm_stderr": 0.04617034827006717
},
"harness|hendrycksTest-computer_security|5": {
"acc": 0.69,
"acc_stderr": 0.04648231987117316,
"acc_norm": 0.69,
"acc_norm_stderr": 0.04648231987117316
},
"harness|hendrycksTest-conceptual_physics|5": {
"acc": 0.42127659574468085,
"acc_stderr": 0.03227834510146268,
"acc_norm": 0.42127659574468085,
"acc_norm_stderr": 0.03227834510146268
},
"harness|hendrycksTest-econometrics|5": {
"acc": 0.2982456140350877,
"acc_stderr": 0.04303684033537315,
"acc_norm": 0.2982456140350877,
"acc_norm_stderr": 0.04303684033537315
},
"harness|hendrycksTest-electrical_engineering|5": {
"acc": 0.503448275862069,
"acc_stderr": 0.0416656757710158,
"acc_norm": 0.503448275862069,
"acc_norm_stderr": 0.0416656757710158
},
"harness|hendrycksTest-elementary_mathematics|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.0242785680243077,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.0242785680243077
},
"harness|hendrycksTest-formal_logic|5": {
"acc": 0.3333333333333333,
"acc_stderr": 0.04216370213557835,
"acc_norm": 0.3333333333333333,
"acc_norm_stderr": 0.04216370213557835
},
"harness|hendrycksTest-global_facts|5": {
"acc": 0.42,
"acc_stderr": 0.049604496374885836,
"acc_norm": 0.42,
"acc_norm_stderr": 0.049604496374885836
},
"harness|hendrycksTest-high_school_biology|5": {
"acc": 0.6451612903225806,
"acc_stderr": 0.02721888977330877,
"acc_norm": 0.6451612903225806,
"acc_norm_stderr": 0.02721888977330877
},
"harness|hendrycksTest-high_school_chemistry|5": {
"acc": 0.47783251231527096,
"acc_stderr": 0.035145285621750094,
"acc_norm": 0.47783251231527096,
"acc_norm_stderr": 0.035145285621750094
},
"harness|hendrycksTest-high_school_computer_science|5": {
"acc": 0.52,
"acc_stderr": 0.050211673156867795,
"acc_norm": 0.52,
"acc_norm_stderr": 0.050211673156867795
},
"harness|hendrycksTest-high_school_european_history|5": {
"acc": 0.696969696969697,
"acc_stderr": 0.03588624800091707,
"acc_norm": 0.696969696969697,
"acc_norm_stderr": 0.03588624800091707
},
"harness|hendrycksTest-high_school_geography|5": {
"acc": 0.6919191919191919,
"acc_stderr": 0.03289477330098616,
"acc_norm": 0.6919191919191919,
"acc_norm_stderr": 0.03289477330098616
},
"harness|hendrycksTest-high_school_government_and_politics|5": {
"acc": 0.8031088082901554,
"acc_stderr": 0.02869787397186068,
"acc_norm": 0.8031088082901554,
"acc_norm_stderr": 0.02869787397186068
},
"harness|hendrycksTest-high_school_macroeconomics|5": {
"acc": 0.5538461538461539,
"acc_stderr": 0.02520357177302833,
"acc_norm": 0.5538461538461539,
"acc_norm_stderr": 0.02520357177302833
},
"harness|hendrycksTest-high_school_mathematics|5": {
"acc": 0.3037037037037037,
"acc_stderr": 0.028037929969114986,
"acc_norm": 0.3037037037037037,
"acc_norm_stderr": 0.028037929969114986
},
"harness|hendrycksTest-high_school_microeconomics|5": {
"acc": 0.6134453781512605,
"acc_stderr": 0.0316314580755238,
"acc_norm": 0.6134453781512605,
"acc_norm_stderr": 0.0316314580755238
},
"harness|hendrycksTest-high_school_physics|5": {
"acc": 0.33112582781456956,
"acc_stderr": 0.038425817186598696,
"acc_norm": 0.33112582781456956,
"acc_norm_stderr": 0.038425817186598696
},
"harness|hendrycksTest-high_school_psychology|5": {
"acc": 0.744954128440367,
"acc_stderr": 0.018688500856535818,
"acc_norm": 0.744954128440367,
"acc_norm_stderr": 0.018688500856535818
},
"harness|hendrycksTest-high_school_statistics|5": {
"acc": 0.4212962962962963,
"acc_stderr": 0.03367462138896079,
"acc_norm": 0.4212962962962963,
"acc_norm_stderr": 0.03367462138896079
},
"harness|hendrycksTest-high_school_us_history|5": {
"acc": 0.7598039215686274,
"acc_stderr": 0.02998373305591362,
"acc_norm": 0.7598039215686274,
"acc_norm_stderr": 0.02998373305591362
},
"harness|hendrycksTest-high_school_world_history|5": {
"acc": 0.7721518987341772,
"acc_stderr": 0.027303484599069425,
"acc_norm": 0.7721518987341772,
"acc_norm_stderr": 0.027303484599069425
},
"harness|hendrycksTest-human_aging|5": {
"acc": 0.6547085201793722,
"acc_stderr": 0.03191100192835795,
"acc_norm": 0.6547085201793722,
"acc_norm_stderr": 0.03191100192835795
},
"harness|hendrycksTest-human_sexuality|5": {
"acc": 0.6564885496183206,
"acc_stderr": 0.041649760719448786,
"acc_norm": 0.6564885496183206,
"acc_norm_stderr": 0.041649760719448786
},
"harness|hendrycksTest-international_law|5": {
"acc": 0.768595041322314,
"acc_stderr": 0.03849856098794088,
"acc_norm": 0.768595041322314,
"acc_norm_stderr": 0.03849856098794088
},
"harness|hendrycksTest-jurisprudence|5": {
"acc": 0.75,
"acc_stderr": 0.04186091791394607,
"acc_norm": 0.75,
"acc_norm_stderr": 0.04186091791394607
},
"harness|hendrycksTest-logical_fallacies|5": {
"acc": 0.6871165644171779,
"acc_stderr": 0.03642914578292406,
"acc_norm": 0.6871165644171779,
"acc_norm_stderr": 0.03642914578292406
},
"harness|hendrycksTest-machine_learning|5": {
"acc": 0.42857142857142855,
"acc_stderr": 0.04697113923010212,
"acc_norm": 0.42857142857142855,
"acc_norm_stderr": 0.04697113923010212
},
"harness|hendrycksTest-management|5": {
"acc": 0.7281553398058253,
"acc_stderr": 0.044052680241409216,
"acc_norm": 0.7281553398058253,
"acc_norm_stderr": 0.044052680241409216
},
"harness|hendrycksTest-marketing|5": {
"acc": 0.8290598290598291,
"acc_stderr": 0.02466249684520982,
"acc_norm": 0.8290598290598291,
"acc_norm_stderr": 0.02466249684520982
},
"harness|hendrycksTest-medical_genetics|5": {
"acc": 0.56,
"acc_stderr": 0.04988876515698589,
"acc_norm": 0.56,
"acc_norm_stderr": 0.04988876515698589
},
"harness|hendrycksTest-miscellaneous|5": {
"acc": 0.7535121328224776,
"acc_stderr": 0.015411308769686936,
"acc_norm": 0.7535121328224776,
"acc_norm_stderr": 0.015411308769686936
},
"harness|hendrycksTest-moral_disputes|5": {
"acc": 0.615606936416185,
"acc_stderr": 0.02618966696627204,
"acc_norm": 0.615606936416185,
"acc_norm_stderr": 0.02618966696627204
},
"harness|hendrycksTest-moral_scenarios|5": {
"acc": 0.27932960893854747,
"acc_stderr": 0.015005762446786164,
"acc_norm": 0.27932960893854747,
"acc_norm_stderr": 0.015005762446786164
},
"harness|hendrycksTest-nutrition|5": {
"acc": 0.6274509803921569,
"acc_stderr": 0.027684181883302895,
"acc_norm": 0.6274509803921569,
"acc_norm_stderr": 0.027684181883302895
},
"harness|hendrycksTest-philosophy|5": {
"acc": 0.6334405144694534,
"acc_stderr": 0.027368078243971646,
"acc_norm": 0.6334405144694534,
"acc_norm_stderr": 0.027368078243971646
},
"harness|hendrycksTest-prehistory|5": {
"acc": 0.6358024691358025,
"acc_stderr": 0.026774929899722313,
"acc_norm": 0.6358024691358025,
"acc_norm_stderr": 0.026774929899722313
},
"harness|hendrycksTest-professional_accounting|5": {
"acc": 0.41134751773049644,
"acc_stderr": 0.02935491115994098,
"acc_norm": 0.41134751773049644,
"acc_norm_stderr": 0.02935491115994098
},
"harness|hendrycksTest-professional_law|5": {
"acc": 0.42503259452411996,
"acc_stderr": 0.012625879884892,
"acc_norm": 0.42503259452411996,
"acc_norm_stderr": 0.012625879884892
},
"harness|hendrycksTest-professional_medicine|5": {
"acc": 0.5073529411764706,
"acc_stderr": 0.030369552523902173,
"acc_norm": 0.5073529411764706,
"acc_norm_stderr": 0.030369552523902173
},
"harness|hendrycksTest-professional_psychology|5": {
"acc": 0.5849673202614379,
"acc_stderr": 0.019933627776857428,
"acc_norm": 0.5849673202614379,
"acc_norm_stderr": 0.019933627776857428
},
"harness|hendrycksTest-public_relations|5": {
"acc": 0.6636363636363637,
"acc_stderr": 0.04525393596302506,
"acc_norm": 0.6636363636363637,
"acc_norm_stderr": 0.04525393596302506
},
"harness|hendrycksTest-security_studies|5": {
"acc": 0.6571428571428571,
"acc_stderr": 0.030387262919547728,
"acc_norm": 0.6571428571428571,
"acc_norm_stderr": 0.030387262919547728
},
"harness|hendrycksTest-sociology|5": {
"acc": 0.7562189054726368,
"acc_stderr": 0.030360490154014666,
"acc_norm": 0.7562189054726368,
"acc_norm_stderr": 0.030360490154014666
},
"harness|hendrycksTest-us_foreign_policy|5": {
"acc": 0.82,
"acc_stderr": 0.038612291966536934,
"acc_norm": 0.82,
"acc_norm_stderr": 0.038612291966536934
},
"harness|hendrycksTest-virology|5": {
"acc": 0.45180722891566266,
"acc_stderr": 0.03874371556587953,
"acc_norm": 0.45180722891566266,
"acc_norm_stderr": 0.03874371556587953
},
"harness|hendrycksTest-world_religions|5": {
"acc": 0.7719298245614035,
"acc_stderr": 0.032180937956023566,
"acc_norm": 0.7719298245614035,
"acc_norm_stderr": 0.032180937956023566
},
"harness|truthfulqa:mc|0": {
"mc1": 0.2937576499388005,
"mc1_stderr": 0.015945068581236614,
"mc2": 0.4275151973594472,
"mc2_stderr": 0.015073406033459216
},
"harness|winogrande|5": {
"acc": 0.7316495659037096,
"acc_stderr": 0.012453340359561195
},
"harness|gsm8k|5": {
"acc": 0.2585291887793783,
"acc_stderr": 0.012059911372516129
}
}
```
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
The dataset is intended for inspecting how the evaluated model behaved on each benchmark: the per-task `harness_*` configurations expose the individual evaluation records, while the `results` configuration carries the aggregated scores reproduced above.
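A minimal sketch of such an inspection, assuming the standard `datasets` API; the configuration and split names below follow this repository's file listing, and any other `harness_*` configuration can be substituted:

```python
from datasets import get_dataset_config_names, load_dataset

repo_id = "open-llm-leaderboard/details_luffycodes__vicuna-class-shishya-all-hal-13b-ep3"

# One configuration per evaluated task, plus the aggregated "results" configuration.
configs = get_dataset_config_names(repo_id)
print(f"{len(configs)} configurations, e.g. {configs[:3]}")

# The "latest" split of each configuration points to the most recent evaluation run.
winogrande_details = load_dataset(repo_id, "harness_winogrande_5", split="latest")
print(winogrande_details)     # number of rows and column names
print(winogrande_details[0])  # one per-example evaluation record
```

The exact column set varies by task, so printing a single record as above is the quickest way to see which fields a given configuration provides.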
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed]
"region:us"
] | 2023-12-22T04:33:08+00:00 | {"pretty_name": "Evaluation run of luffycodes/vicuna-class-shishya-all-hal-13b-ep3", "dataset_summary": "Dataset automatically created during the evaluation run of model [luffycodes/vicuna-class-shishya-all-hal-13b-ep3](https://huggingface.co/luffycodes/vicuna-class-shishya-all-hal-13b-ep3) on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard).\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the [Open LLM Leaderboard](https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard)).\n\nTo load the details from a run, you can for instance do the following:\n```python\nfrom datasets import load_dataset\ndata = load_dataset(\"open-llm-leaderboard/details_luffycodes__vicuna-class-shishya-all-hal-13b-ep3\",\n\t\"harness_winogrande_5\",\n\tsplit=\"train\")\n```\n\n## Latest results\n\nThese are the [latest results from run 2023-12-22T04:30:49.229801](https://huggingface.co/datasets/open-llm-leaderboard/details_luffycodes__vicuna-class-shishya-all-hal-13b-ep3/blob/main/results_2023-12-22T04-30-49.229801.json)(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):\n\n```python\n{\n \"all\": {\n \"acc\": 0.5604887588394379,\n \"acc_stderr\": 0.033604319980563735,\n \"acc_norm\": 0.5667605430192505,\n \"acc_norm_stderr\": 0.03431410978810103,\n \"mc1\": 0.2937576499388005,\n \"mc1_stderr\": 0.015945068581236614,\n \"mc2\": 0.4275151973594472,\n \"mc2_stderr\": 0.015073406033459216\n },\n \"harness|arc:challenge|25\": {\n \"acc\": 0.44368600682593856,\n \"acc_stderr\": 0.014518421825670454,\n \"acc_norm\": 0.4863481228668942,\n \"acc_norm_stderr\": 0.014605943429860942\n },\n \"harness|hellaswag|10\": {\n \"acc\": 0.6062537343158734,\n \"acc_stderr\": 0.004875812021462007,\n \"acc_norm\": 0.8028281218880701,\n \"acc_norm_stderr\": 0.003970500832922039\n },\n \"harness|hendrycksTest-abstract_algebra|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621502,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621502\n },\n \"harness|hendrycksTest-anatomy|5\": {\n \"acc\": 0.4666666666666667,\n \"acc_stderr\": 0.043097329010363554,\n \"acc_norm\": 0.4666666666666667,\n \"acc_norm_stderr\": 0.043097329010363554\n },\n \"harness|hendrycksTest-astronomy|5\": {\n \"acc\": 0.5657894736842105,\n \"acc_stderr\": 0.040335656678483205,\n \"acc_norm\": 0.5657894736842105,\n \"acc_norm_stderr\": 0.040335656678483205\n },\n \"harness|hendrycksTest-business_ethics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-clinical_knowledge|5\": {\n \"acc\": 0.6226415094339622,\n \"acc_stderr\": 0.02983280811479601,\n \"acc_norm\": 0.6226415094339622,\n \"acc_norm_stderr\": 0.02983280811479601\n },\n \"harness|hendrycksTest-college_biology|5\": {\n \"acc\": 0.5763888888888888,\n \"acc_stderr\": 0.04132125019723369,\n \"acc_norm\": 0.5763888888888888,\n \"acc_norm_stderr\": 
0.04132125019723369\n },\n \"harness|hendrycksTest-college_chemistry|5\": {\n \"acc\": 0.44,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.44,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-college_computer_science|5\": {\n \"acc\": 0.4,\n \"acc_stderr\": 0.04923659639173309,\n \"acc_norm\": 0.4,\n \"acc_norm_stderr\": 0.04923659639173309\n },\n \"harness|hendrycksTest-college_mathematics|5\": {\n \"acc\": 0.32,\n \"acc_stderr\": 0.04688261722621505,\n \"acc_norm\": 0.32,\n \"acc_norm_stderr\": 0.04688261722621505\n },\n \"harness|hendrycksTest-college_medicine|5\": {\n \"acc\": 0.5722543352601156,\n \"acc_stderr\": 0.03772446857518027,\n \"acc_norm\": 0.5722543352601156,\n \"acc_norm_stderr\": 0.03772446857518027\n },\n \"harness|hendrycksTest-college_physics|5\": {\n \"acc\": 0.3137254901960784,\n \"acc_stderr\": 0.04617034827006717,\n \"acc_norm\": 0.3137254901960784,\n \"acc_norm_stderr\": 0.04617034827006717\n },\n \"harness|hendrycksTest-computer_security|5\": {\n \"acc\": 0.69,\n \"acc_stderr\": 0.04648231987117316,\n \"acc_norm\": 0.69,\n \"acc_norm_stderr\": 0.04648231987117316\n },\n \"harness|hendrycksTest-conceptual_physics|5\": {\n \"acc\": 0.42127659574468085,\n \"acc_stderr\": 0.03227834510146268,\n \"acc_norm\": 0.42127659574468085,\n \"acc_norm_stderr\": 0.03227834510146268\n },\n \"harness|hendrycksTest-econometrics|5\": {\n \"acc\": 0.2982456140350877,\n \"acc_stderr\": 0.04303684033537315,\n \"acc_norm\": 0.2982456140350877,\n \"acc_norm_stderr\": 0.04303684033537315\n },\n \"harness|hendrycksTest-electrical_engineering|5\": {\n \"acc\": 0.503448275862069,\n \"acc_stderr\": 0.0416656757710158,\n \"acc_norm\": 0.503448275862069,\n \"acc_norm_stderr\": 0.0416656757710158\n },\n \"harness|hendrycksTest-elementary_mathematics|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.0242785680243077,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.0242785680243077\n },\n \"harness|hendrycksTest-formal_logic|5\": {\n \"acc\": 0.3333333333333333,\n \"acc_stderr\": 0.04216370213557835,\n \"acc_norm\": 0.3333333333333333,\n \"acc_norm_stderr\": 0.04216370213557835\n },\n \"harness|hendrycksTest-global_facts|5\": {\n \"acc\": 0.42,\n \"acc_stderr\": 0.049604496374885836,\n \"acc_norm\": 0.42,\n \"acc_norm_stderr\": 0.049604496374885836\n },\n \"harness|hendrycksTest-high_school_biology|5\": {\n \"acc\": 0.6451612903225806,\n \"acc_stderr\": 0.02721888977330877,\n \"acc_norm\": 0.6451612903225806,\n \"acc_norm_stderr\": 0.02721888977330877\n },\n \"harness|hendrycksTest-high_school_chemistry|5\": {\n \"acc\": 0.47783251231527096,\n \"acc_stderr\": 0.035145285621750094,\n \"acc_norm\": 0.47783251231527096,\n \"acc_norm_stderr\": 0.035145285621750094\n },\n \"harness|hendrycksTest-high_school_computer_science|5\": {\n \"acc\": 0.52,\n \"acc_stderr\": 0.050211673156867795,\n \"acc_norm\": 0.52,\n \"acc_norm_stderr\": 0.050211673156867795\n },\n \"harness|hendrycksTest-high_school_european_history|5\": {\n \"acc\": 0.696969696969697,\n \"acc_stderr\": 0.03588624800091707,\n \"acc_norm\": 0.696969696969697,\n \"acc_norm_stderr\": 0.03588624800091707\n },\n \"harness|hendrycksTest-high_school_geography|5\": {\n \"acc\": 0.6919191919191919,\n \"acc_stderr\": 0.03289477330098616,\n \"acc_norm\": 0.6919191919191919,\n \"acc_norm_stderr\": 0.03289477330098616\n },\n \"harness|hendrycksTest-high_school_government_and_politics|5\": {\n \"acc\": 0.8031088082901554,\n \"acc_stderr\": 0.02869787397186068,\n \"acc_norm\": 0.8031088082901554,\n 
\"acc_norm_stderr\": 0.02869787397186068\n },\n \"harness|hendrycksTest-high_school_macroeconomics|5\": {\n \"acc\": 0.5538461538461539,\n \"acc_stderr\": 0.02520357177302833,\n \"acc_norm\": 0.5538461538461539,\n \"acc_norm_stderr\": 0.02520357177302833\n },\n \"harness|hendrycksTest-high_school_mathematics|5\": {\n \"acc\": 0.3037037037037037,\n \"acc_stderr\": 0.028037929969114986,\n \"acc_norm\": 0.3037037037037037,\n \"acc_norm_stderr\": 0.028037929969114986\n },\n \"harness|hendrycksTest-high_school_microeconomics|5\": {\n \"acc\": 0.6134453781512605,\n \"acc_stderr\": 0.0316314580755238,\n \"acc_norm\": 0.6134453781512605,\n \"acc_norm_stderr\": 0.0316314580755238\n },\n \"harness|hendrycksTest-high_school_physics|5\": {\n \"acc\": 0.33112582781456956,\n \"acc_stderr\": 0.038425817186598696,\n \"acc_norm\": 0.33112582781456956,\n \"acc_norm_stderr\": 0.038425817186598696\n },\n \"harness|hendrycksTest-high_school_psychology|5\": {\n \"acc\": 0.744954128440367,\n \"acc_stderr\": 0.018688500856535818,\n \"acc_norm\": 0.744954128440367,\n \"acc_norm_stderr\": 0.018688500856535818\n },\n \"harness|hendrycksTest-high_school_statistics|5\": {\n \"acc\": 0.4212962962962963,\n \"acc_stderr\": 0.03367462138896079,\n \"acc_norm\": 0.4212962962962963,\n \"acc_norm_stderr\": 0.03367462138896079\n },\n \"harness|hendrycksTest-high_school_us_history|5\": {\n \"acc\": 0.7598039215686274,\n \"acc_stderr\": 0.02998373305591362,\n \"acc_norm\": 0.7598039215686274,\n \"acc_norm_stderr\": 0.02998373305591362\n },\n \"harness|hendrycksTest-high_school_world_history|5\": {\n \"acc\": 0.7721518987341772,\n \"acc_stderr\": 0.027303484599069425,\n \"acc_norm\": 0.7721518987341772,\n \"acc_norm_stderr\": 0.027303484599069425\n },\n \"harness|hendrycksTest-human_aging|5\": {\n \"acc\": 0.6547085201793722,\n \"acc_stderr\": 0.03191100192835795,\n \"acc_norm\": 0.6547085201793722,\n \"acc_norm_stderr\": 0.03191100192835795\n },\n \"harness|hendrycksTest-human_sexuality|5\": {\n \"acc\": 0.6564885496183206,\n \"acc_stderr\": 0.041649760719448786,\n \"acc_norm\": 0.6564885496183206,\n \"acc_norm_stderr\": 0.041649760719448786\n },\n \"harness|hendrycksTest-international_law|5\": {\n \"acc\": 0.768595041322314,\n \"acc_stderr\": 0.03849856098794088,\n \"acc_norm\": 0.768595041322314,\n \"acc_norm_stderr\": 0.03849856098794088\n },\n \"harness|hendrycksTest-jurisprudence|5\": {\n \"acc\": 0.75,\n \"acc_stderr\": 0.04186091791394607,\n \"acc_norm\": 0.75,\n \"acc_norm_stderr\": 0.04186091791394607\n },\n \"harness|hendrycksTest-logical_fallacies|5\": {\n \"acc\": 0.6871165644171779,\n \"acc_stderr\": 0.03642914578292406,\n \"acc_norm\": 0.6871165644171779,\n \"acc_norm_stderr\": 0.03642914578292406\n },\n \"harness|hendrycksTest-machine_learning|5\": {\n \"acc\": 0.42857142857142855,\n \"acc_stderr\": 0.04697113923010212,\n \"acc_norm\": 0.42857142857142855,\n \"acc_norm_stderr\": 0.04697113923010212\n },\n \"harness|hendrycksTest-management|5\": {\n \"acc\": 0.7281553398058253,\n \"acc_stderr\": 0.044052680241409216,\n \"acc_norm\": 0.7281553398058253,\n \"acc_norm_stderr\": 0.044052680241409216\n },\n \"harness|hendrycksTest-marketing|5\": {\n \"acc\": 0.8290598290598291,\n \"acc_stderr\": 0.02466249684520982,\n \"acc_norm\": 0.8290598290598291,\n \"acc_norm_stderr\": 0.02466249684520982\n },\n \"harness|hendrycksTest-medical_genetics|5\": {\n \"acc\": 0.56,\n \"acc_stderr\": 0.04988876515698589,\n \"acc_norm\": 0.56,\n \"acc_norm_stderr\": 0.04988876515698589\n },\n \"harness|hendrycksTest-miscellaneous|5\": {\n 
\"acc\": 0.7535121328224776,\n \"acc_stderr\": 0.015411308769686936,\n \"acc_norm\": 0.7535121328224776,\n \"acc_norm_stderr\": 0.015411308769686936\n },\n \"harness|hendrycksTest-moral_disputes|5\": {\n \"acc\": 0.615606936416185,\n \"acc_stderr\": 0.02618966696627204,\n \"acc_norm\": 0.615606936416185,\n \"acc_norm_stderr\": 0.02618966696627204\n },\n \"harness|hendrycksTest-moral_scenarios|5\": {\n \"acc\": 0.27932960893854747,\n \"acc_stderr\": 0.015005762446786164,\n \"acc_norm\": 0.27932960893854747,\n \"acc_norm_stderr\": 0.015005762446786164\n },\n \"harness|hendrycksTest-nutrition|5\": {\n \"acc\": 0.6274509803921569,\n \"acc_stderr\": 0.027684181883302895,\n \"acc_norm\": 0.6274509803921569,\n \"acc_norm_stderr\": 0.027684181883302895\n },\n \"harness|hendrycksTest-philosophy|5\": {\n \"acc\": 0.6334405144694534,\n \"acc_stderr\": 0.027368078243971646,\n \"acc_norm\": 0.6334405144694534,\n \"acc_norm_stderr\": 0.027368078243971646\n },\n \"harness|hendrycksTest-prehistory|5\": {\n \"acc\": 0.6358024691358025,\n \"acc_stderr\": 0.026774929899722313,\n \"acc_norm\": 0.6358024691358025,\n \"acc_norm_stderr\": 0.026774929899722313\n },\n \"harness|hendrycksTest-professional_accounting|5\": {\n \"acc\": 0.41134751773049644,\n \"acc_stderr\": 0.02935491115994098,\n \"acc_norm\": 0.41134751773049644,\n \"acc_norm_stderr\": 0.02935491115994098\n },\n \"harness|hendrycksTest-professional_law|5\": {\n \"acc\": 0.42503259452411996,\n \"acc_stderr\": 0.012625879884892,\n \"acc_norm\": 0.42503259452411996,\n \"acc_norm_stderr\": 0.012625879884892\n },\n \"harness|hendrycksTest-professional_medicine|5\": {\n \"acc\": 0.5073529411764706,\n \"acc_stderr\": 0.030369552523902173,\n \"acc_norm\": 0.5073529411764706,\n \"acc_norm_stderr\": 0.030369552523902173\n },\n \"harness|hendrycksTest-professional_psychology|5\": {\n \"acc\": 0.5849673202614379,\n \"acc_stderr\": 0.019933627776857428,\n \"acc_norm\": 0.5849673202614379,\n \"acc_norm_stderr\": 0.019933627776857428\n },\n \"harness|hendrycksTest-public_relations|5\": {\n \"acc\": 0.6636363636363637,\n \"acc_stderr\": 0.04525393596302506,\n \"acc_norm\": 0.6636363636363637,\n \"acc_norm_stderr\": 0.04525393596302506\n },\n \"harness|hendrycksTest-security_studies|5\": {\n \"acc\": 0.6571428571428571,\n \"acc_stderr\": 0.030387262919547728,\n \"acc_norm\": 0.6571428571428571,\n \"acc_norm_stderr\": 0.030387262919547728\n },\n \"harness|hendrycksTest-sociology|5\": {\n \"acc\": 0.7562189054726368,\n \"acc_stderr\": 0.030360490154014666,\n \"acc_norm\": 0.7562189054726368,\n \"acc_norm_stderr\": 0.030360490154014666\n },\n \"harness|hendrycksTest-us_foreign_policy|5\": {\n \"acc\": 0.82,\n \"acc_stderr\": 0.038612291966536934,\n \"acc_norm\": 0.82,\n \"acc_norm_stderr\": 0.038612291966536934\n },\n \"harness|hendrycksTest-virology|5\": {\n \"acc\": 0.45180722891566266,\n \"acc_stderr\": 0.03874371556587953,\n \"acc_norm\": 0.45180722891566266,\n \"acc_norm_stderr\": 0.03874371556587953\n },\n \"harness|hendrycksTest-world_religions|5\": {\n \"acc\": 0.7719298245614035,\n \"acc_stderr\": 0.032180937956023566,\n \"acc_norm\": 0.7719298245614035,\n \"acc_norm_stderr\": 0.032180937956023566\n },\n \"harness|truthfulqa:mc|0\": {\n \"mc1\": 0.2937576499388005,\n \"mc1_stderr\": 0.015945068581236614,\n \"mc2\": 0.4275151973594472,\n \"mc2_stderr\": 0.015073406033459216\n },\n \"harness|winogrande|5\": {\n \"acc\": 0.7316495659037096,\n \"acc_stderr\": 0.012453340359561195\n },\n \"harness|gsm8k|5\": {\n \"acc\": 0.2585291887793783,\n \"acc_stderr\": 
0.012059911372516129\n }\n}\n```", "repo_url": "https://huggingface.co/luffycodes/vicuna-class-shishya-all-hal-13b-ep3", "leaderboard_url": "https://huggingface.co/spaces/HuggingFaceH4/open_llm_leaderboard", "point_of_contact": "[email protected]", "configs": [{"config_name": "harness_arc_challenge_25", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|arc:challenge|25_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|arc:challenge|25_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_gsm8k_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|gsm8k|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|gsm8k|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hellaswag_10", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hellaswag|10_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hellaswag|10_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T04-30-49.229801.parquet", 
"**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-philosophy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-anatomy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-astronomy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_biology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T04-30-49.229801.parquet", 
"**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-college_physics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-computer_security|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-econometrics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-global_facts|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-human_aging|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-international_law|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-management|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-marketing|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-nutrition|5_2023-12-22T04-30-49.229801.parquet", 
"**/details_harness|hendrycksTest-philosophy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-prehistory|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_law|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-public_relations|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-security_studies|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-sociology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-virology|5_2023-12-22T04-30-49.229801.parquet", "**/details_harness|hendrycksTest-world_religions|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_abstract_algebra_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-abstract_algebra|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_anatomy_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-anatomy|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_astronomy_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-astronomy|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_business_ethics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-business_ethics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_clinical_knowledge_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-clinical_knowledge|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_biology_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_biology|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_chemistry_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_chemistry|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_computer_science_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": 
["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_computer_science|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_mathematics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_mathematics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_medicine_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_medicine|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_college_physics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-college_physics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_computer_security_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-computer_security|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_conceptual_physics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-conceptual_physics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_econometrics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-econometrics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_electrical_engineering_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-electrical_engineering|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_elementary_mathematics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-elementary_mathematics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_formal_logic_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-formal_logic|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_global_facts_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-global_facts|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": 
["**/details_harness|hendrycksTest-global_facts|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_biology_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_biology|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_chemistry_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_chemistry|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_computer_science_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_computer_science|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_european_history_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_european_history|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_geography_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_geography|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_government_and_politics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_government_and_politics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_macroeconomics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_macroeconomics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_mathematics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_mathematics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_microeconomics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_microeconomics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_physics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": 
["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_physics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_psychology_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_psychology|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_statistics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_statistics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_us_history_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_us_history|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_high_school_world_history_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-high_school_world_history|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_aging_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_aging|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_human_sexuality_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-human_sexuality|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_international_law_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-international_law|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_jurisprudence_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-jurisprudence|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_logical_fallacies_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-logical_fallacies|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_machine_learning_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T04-30-49.229801.parquet"]}, 
{"split": "latest", "path": ["**/details_harness|hendrycksTest-machine_learning|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_management_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-management|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_marketing_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-marketing|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_medical_genetics_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-medical_genetics|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_miscellaneous_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-miscellaneous|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_disputes_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_disputes|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_moral_scenarios_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-moral_scenarios|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_nutrition_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-nutrition|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_philosophy_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-philosophy|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_prehistory_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-prehistory|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_accounting_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_accounting|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_law_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": 
["**/details_harness|hendrycksTest-professional_law|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_law|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_medicine_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_medicine|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_professional_psychology_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-professional_psychology|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_public_relations_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-public_relations|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_security_studies_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-security_studies|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_sociology_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-sociology|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_us_foreign_policy_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-us_foreign_policy|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_virology_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-virology|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_hendrycksTest_world_religions_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|hendrycksTest-world_religions|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_truthfulqa_mc_0", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|truthfulqa:mc|0_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "harness_winogrande_5", "data_files": [{"split": "2023_12_22T04_30_49.229801", "path": ["**/details_harness|winogrande|5_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["**/details_harness|winogrande|5_2023-12-22T04-30-49.229801.parquet"]}]}, {"config_name": "results", "data_files": [{"split": 
"2023_12_22T04_30_49.229801", "path": ["results_2023-12-22T04-30-49.229801.parquet"]}, {"split": "latest", "path": ["results_2023-12-22T04-30-49.229801.parquet"]}]}]} | 2023-12-22T04:33:28+00:00 | [] | [] | TAGS
"## Latest results\n\nThese are the latest results from run 2023-12-22T04:30:49.229801(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
203,
68,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Evaluation run of luffycodes/vicuna-class-shishya-all-hal-13b-ep3\n\n\n\nDataset automatically created during the evaluation run of model luffycodes/vicuna-class-shishya-all-hal-13b-ep3 on the Open LLM Leaderboard.\n\nThe dataset is composed of 63 configuration, each one coresponding to one of the evaluated task.\n\nThe dataset has been created from 1 run(s). Each run can be found as a specific split in each configuration, the split being named using the timestamp of the run.The \"train\" split is always pointing to the latest results.\n\nAn additional configuration \"results\" store all the aggregated results of the run (and is used to compute and display the aggregated metrics on the Open LLM Leaderboard).\n\nTo load the details from a run, you can for instance do the following:## Latest results\n\nThese are the latest results from run 2023-12-22T04:30:49.229801(note that their might be results for other tasks in the repos if successive evals didn't cover the same tasks. You find each in the results and the \"latest\" split for each eval):## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]"
] |
df19dc43370e96c60b9ea10c48ab756e63d89b5d | # Dataset Card for "slimorca_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/slimorca_binarized | [
"region:us"
] | 2023-12-22T05:52:58+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}], "dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 542094842.6389099, "num_examples": 327141}, {"name": "test", "num_bytes": 60234417.361090094, "num_examples": 36350}], "download_size": 306760393, "dataset_size": 602329260.0}} | 2023-12-22T05:53:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "slimorca_binarized"
More Information needed | [
"# Dataset Card for \"slimorca_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"slimorca_binarized\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"slimorca_binarized\"\n\nMore Information needed"
] |
2537ccd2e21db4cf8dc26cc6d0fb33884202530a | # **Used to train models that interact directly with shells**
Follow-up details of my process
- MacOS terminal commands for now. This dataset is still in alpha stages and will be modified.
- Contains 500 somewhat unique training examples so far.
- GPT4 seems like a good candidate for generating more data, licensing would need to be addressed.
- I fine-tuned Solar-10.7B-Instruct-v1.0 with this dataset using a slightly modified version of axolotl. Just a few epochs was enough to get it to output correctly.
- I use oobabooga/text-generation-webui with a custom chat extension for inference. No sandbox is used, it is piped directly into MacOS bash because I'm reckless. C:
- Currently working towards training an MoE (2x7B), multi-modal model (image/text) with this dataset. (BakLLaVA-1-7B + LLaVA-v1.5-7B)
- Inference stages:
 1. Send the instruction to the model, expect a command.
 2. Detect the shell command and send it to a sand-boxed shell.
 3. The shell response should be sent as additional input to the model.
 4. The final model response should be sent to the user from the assistant (a rough sketch of this loop follows the sample prompts below).
TODO:
- Possible "os" column to specify which system the command should be used with, maybe separate datasets for each system type.
## **Sample prompt: (in series, depends on your specific model prompt)**
```
### User:
List files in 'Downloads'
### Command:
ls ~/Downloads
```
```
### Shell:
file1.pdf file2.txt file3.zip
### Assistant:
Listing files in 'Downloads': file1.pdf file2.txt file3.zip
```
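A minimal sketch of the inference stages above, wired around this prompt format. Here `generate()` is a placeholder for whatever backend serves the model (the setup described above uses text-generation-webui with a custom chat extension), and the `subprocess` call is unsandboxed, so treat it as illustrative only:
```python
import subprocess

def generate(prompt: str) -> str:
    """Placeholder for the model backend (e.g. an API call to text-generation-webui)."""
    raise NotImplementedError

def run_instruction(instruction: str) -> str:
    # Stage 1: ask the model for a shell command.
    prompt = f"### User:\n{instruction}\n### Command:\n"
    command = generate(prompt).strip()

    # Stage 2: execute the command (ideally inside a sandboxed shell).
    result = subprocess.run(command, shell=True, capture_output=True, text=True, timeout=30)
    shell_output = (result.stdout + result.stderr).strip()

    # Stages 3-4: feed the shell output back and return the assistant's final reply.
    prompt += f"{command}\n### Shell:\n{shell_output}\n### Assistant:\n"
    return generate(prompt).strip()
```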
| byroneverson/shell-cmd-instruct | [
"task_categories:text-generation",
"language:en",
"license:apache-2.0",
"instruction-finetuning",
"region:us"
] | 2023-12-22T06:28:07+00:00 | {"language": ["en"], "license": "apache-2.0", "task_categories": ["text-generation"], "pretty_name": "Shell Command Instruct", "tags": ["instruction-finetuning"]} | 2024-01-11T07:10:32+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #language-English #license-apache-2.0 #instruction-finetuning #region-us
| # Used to train models that interact directly with shells
Follow-up details of my process
- MacOS terminal commands for now. This dataset is still in alpha stages and will be modified.
- Contains 500 somewhat unique training examples so far.
- GPT4 seems like a good candidate for generating more data, licensing would need to be addressed.
- I fine-tuned Solar-10.7B-Instruct-v1.0 with this dataset using a slightly modified version of axolotl. Just a few epochs was enough to get it to output correctly.
- I use oobabooga/text-generation-webui with a custom chat extension for inference. No sandbox is used, it is piped directly into MacOS bash because I'm reckless. C:
- Currently working towards training an MoE (2x7B), multi-modal model (image/text) with this dataset. (BakLLaVA-1-7B + LLaVA-v1.5-7B)
- Inference stages:
1. Send the instruction to the model, expect command.
2. Detect shell command and send to sand-boxed shell.
4. Shell respose should be sent as additional input to model.
5. The final model response should be sent to user from assistant.
TODO:
- Possible "os" column to specify which system the command should be used with, maybe separate datasets for each system type.
## Sample prompt: (in series, depends on your specific model prompt)
| [
"# Used to train models that interact directly with shells\n\nFollow-up details of my process \n\n- MacOS terminal commands for now. This dataset is still in alpha stages and will be modified.\n- Contains 500 somewhat unique training examples so far.\n- GPT4 seems like a good candidate for generating more data, licensing would need to be addressed.\n- I fine-tuned Solar-10.7B-Instruct-v1.0 with this dataset using a slightly modified version of axolotl. Just a few epochs was enough to get it to output correctly.\n- I use oobabooga/text-generation-webui with a custom chat extension for inference. No sandbox is used, it is piped directly into MacOS bash because I'm reckless. C:\n\n- Currently working towards training an MoE (2x7B), multi-modal model (image/text) with this dataset. (BakLLaVA-1-7B + LLaVA-v1.5-7B)\n\n- Inference stages:\n 1. Send the instruction to the model, expect command.\n 2. Detect shell command and send to sand-boxed shell.\n 4. Shell respose should be sent as additional input to model.\n 5. The final model response should be sent to user from assistant.\n \nTODO:\n- Possible \"os\" column to specify which system the command should be used with, maybe separate datasets for each system type.",
"## Sample prompt: (in series, depends on your specific model prompt)"
] | [
"TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #instruction-finetuning #region-us \n",
"# Used to train models that interact directly with shells\n\nFollow-up details of my process \n\n- MacOS terminal commands for now. This dataset is still in alpha stages and will be modified.\n- Contains 500 somewhat unique training examples so far.\n- GPT4 seems like a good candidate for generating more data, licensing would need to be addressed.\n- I fine-tuned Solar-10.7B-Instruct-v1.0 with this dataset using a slightly modified version of axolotl. Just a few epochs was enough to get it to output correctly.\n- I use oobabooga/text-generation-webui with a custom chat extension for inference. No sandbox is used, it is piped directly into MacOS bash because I'm reckless. C:\n\n- Currently working towards training an MoE (2x7B), multi-modal model (image/text) with this dataset. (BakLLaVA-1-7B + LLaVA-v1.5-7B)\n\n- Inference stages:\n 1. Send the instruction to the model, expect command.\n 2. Detect shell command and send to sand-boxed shell.\n 4. Shell respose should be sent as additional input to model.\n 5. The final model response should be sent to user from assistant.\n \nTODO:\n- Possible \"os\" column to specify which system the command should be used with, maybe separate datasets for each system type.",
"## Sample prompt: (in series, depends on your specific model prompt)"
] | [
35,
326,
17
] | [
"passage: TAGS\n#task_categories-text-generation #language-English #license-apache-2.0 #instruction-finetuning #region-us \n# Used to train models that interact directly with shells\n\nFollow-up details of my process \n\n- MacOS terminal commands for now. This dataset is still in alpha stages and will be modified.\n- Contains 500 somewhat unique training examples so far.\n- GPT4 seems like a good candidate for generating more data, licensing would need to be addressed.\n- I fine-tuned Solar-10.7B-Instruct-v1.0 with this dataset using a slightly modified version of axolotl. Just a few epochs was enough to get it to output correctly.\n- I use oobabooga/text-generation-webui with a custom chat extension for inference. No sandbox is used, it is piped directly into MacOS bash because I'm reckless. C:\n\n- Currently working towards training an MoE (2x7B), multi-modal model (image/text) with this dataset. (BakLLaVA-1-7B + LLaVA-v1.5-7B)\n\n- Inference stages:\n 1. Send the instruction to the model, expect command.\n 2. Detect shell command and send to sand-boxed shell.\n 4. Shell respose should be sent as additional input to model.\n 5. The final model response should be sent to user from assistant.\n \nTODO:\n- Possible \"os\" column to specify which system the command should be used with, maybe separate datasets for each system type.## Sample prompt: (in series, depends on your specific model prompt)"
] |
00c070f7eb9458e72385caff7aab1a14ea6b5c04 |
3.5k hallucinated biographies for inducing hallucinations for contrast in our ICD method on Factscore. More details are provided in our [Github](https://github.com/hillzhang1999/ICD) and [Paper](). | HillZhang/hallu_bio_3.5k | [
"task_categories:text-generation",
"size_categories:1K<n<10K",
"language:en",
"license:mit",
"region:us"
] | 2023-12-22T06:38:05+00:00 | {"language": ["en"], "license": "mit", "size_categories": ["1K<n<10K"], "task_categories": ["text-generation"]} | 2023-12-22T06:45:50+00:00 | [] | [
"en"
] | TAGS
#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #region-us
|
3.5k hallucinated biographies for inducing hallucinations for contrast in our ICD method on Factscore. More details are provided in our Github and [Paper](). | [] | [
"TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #region-us \n"
] | [
38
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-1K<n<10K #language-English #license-mit #region-us \n"
] |
a6488f612ad1d0f0248b97e62c230ae851a76067 | qingcha/qingchadataset666 --- test | qingcha/qingchadataset666 | [
"license:apache-2.0",
"region:us"
] | 2023-12-22T06:52:08+00:00 | {"license": "apache-2.0"} | 2023-12-22T06:52:29+00:00 | [] | [] | TAGS
#license-apache-2.0 #region-us
| qingcha/qingchadataset666 --- test | [] | [
"TAGS\n#license-apache-2.0 #region-us \n"
] | [
14
] | [
"passage: TAGS\n#license-apache-2.0 #region-us \n"
] |
7da3b78e95c6dc23344fa8426b84305b5af12ca1 | # Dataset Card for "shp_dpo_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/shp_dpo_binarized | [
"region:us"
] | 2023-12-22T07:45:40+00:00 | {"dataset_info": {"features": [{"name": "chosen", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}, {"name": "rejected", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 996005385, "num_examples": 367154}, {"name": "test", "num_bytes": 50042020, "num_examples": 18409}], "download_size": 145533201, "dataset_size": 1046047405}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-22T07:46:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "shp_dpo_binarized"
More Information needed | [
"# Dataset Card for \"shp_dpo_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"shp_dpo_binarized\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"shp_dpo_binarized\"\n\nMore Information needed"
] |
f684c0337f55492d2594df4cbfeb428b6a290a8e | # Dataset Card for "limarp_binarized"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | jan-hq/limarp_binarized | [
"region:us"
] | 2023-12-22T08:28:29+00:00 | {"dataset_info": {"features": [{"name": "messages", "list": [{"name": "content", "dtype": "string"}, {"name": "role", "dtype": "string"}]}], "splits": [{"name": "train", "num_bytes": 15393265, "num_examples": 648}], "download_size": 9062945, "dataset_size": 15393265}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2023-12-22T08:28:36+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "limarp_binarized"
More Information needed | [
"# Dataset Card for \"limarp_binarized\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"limarp_binarized\"\n\nMore Information needed"
] | [
6,
16
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"limarp_binarized\"\n\nMore Information needed"
] |
125c9f2186b5403011ebf2f3cf00cad9fc1500ec | # LyricCanvas Dataset
- The LyricCanvas dataset contains approximately 10M lines of lyrics with corresponding visual elaborations (visualizable prompts).
- It could be used to train large language models to translate highly abstract concepts and metaphorical phrases into visualizable prompts for image generation; see [ViPE](https://huggingface.co/fittar/ViPE-M-CTX7).
- Due to copyright policies, we are not allowed to publish the lyrics; however, we release the visual elaborations and the scraper through which you can collect the lyrics and rebuild LyricCanvas at no additional cost.
## Compiling LyricCanvas
- Download the lyric_canvas.csv file on this repository
- Follow the steps laid out [here](https://github.com/Hazel1994/ViPE/tree/main/lyric_canvas) to complete the dataset
- Enjoy!
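The released visual elaborations can also be fetched programmatically — a minimal sketch below; the CSV's column layout is not documented here, so inspect `df.columns` after loading:
```python
import pandas as pd
from huggingface_hub import hf_hub_download

# Fetch the released lyric_canvas.csv (visual elaborations only; lyrics must be re-scraped as described above).
path = hf_hub_download(repo_id="fittar/lyric_canvas", filename="lyric_canvas.csv", repo_type="dataset")
df = pd.read_csv(path)
print(df.columns)
```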
## Citation
If you found LyricCanvas useful, please consider citing
```
@inproceedings{shahmohammadi-etal-2023-vipe,
title = "{V}i{PE}: Visualise Pretty-much Everything",
author = "Shahmohammadi, Hassan and
Ghosh, Adhiraj and
Lensch, Hendrik",
editor = "Bouamor, Houda and
Pino, Juan and
Bali, Kalika",
booktitle = "Proceedings of the 2023 Conference on Empirical Methods in Natural Language Processing",
month = dec,
year = "2023",
address = "Singapore",
publisher = "Association for Computational Linguistics",
url = "https://aclanthology.org/2023.emnlp-main.333",
pages = "5477--5494"
}
``` | fittar/lyric_canvas | [
"license:mit",
"region:us"
] | 2023-12-22T09:11:59+00:00 | {"license": "mit"} | 2023-12-22T09:22:03+00:00 | [] | [] | TAGS
#license-mit #region-us
| # LyricCanvas Dataset
- The lyricCanvas dataset contains approximately 10M lines of lyrics with corresponding visual elaborations (visualizable prompts).
- It could be used to train large language models to translate highly abstract concepts and metaphorical
phrases to visualizable prompts for image generation, see ViPE.
- Due to copyright policies, we are not allowed to publish the lyrics, however, we release the visual elaborations and the scraper through which
you can collect the lyrics and rebuild LyricCanvas with no additional cost.
## Compiling LyricCanvas
- Download the lyric_canvas.csv file on this repository
- Follow the steps laid out here to complete the dataset
- Enjoy!
If you found LyricCanvas useful, please consider citing
| [
"# LyricCanvas Dataset\n- The lyricCanvas dataset contains approximately 10M lines of lyrics with corresponding visual elaborations (visualizable prompts).\n- It could be used to train large language models to translate highly abstract concepts and metaphorical\nphrases to visualizable prompts for image generation, see ViPE.\n- Due to copyright policies, we are not allowed to publish the lyrics, however, we release the visual elaborations and the scraper through which\nyou can collect the lyrics and rebuild LyricCanvas with no additional cost.",
"## Compiling LyricCanvas\n- Download the lyric_canvas.csv file on this repository\n- Follow the steps laid out here to complete the dataset\n- Enjoy!\n\nIf you found LyricCanvas useful, please consider citing"
] | [
"TAGS\n#license-mit #region-us \n",
"# LyricCanvas Dataset\n- The lyricCanvas dataset contains approximately 10M lines of lyrics with corresponding visual elaborations (visualizable prompts).\n- It could be used to train large language models to translate highly abstract concepts and metaphorical\nphrases to visualizable prompts for image generation, see ViPE.\n- Due to copyright policies, we are not allowed to publish the lyrics, however, we release the visual elaborations and the scraper through which\nyou can collect the lyrics and rebuild LyricCanvas with no additional cost.",
"## Compiling LyricCanvas\n- Download the lyric_canvas.csv file on this repository\n- Follow the steps laid out here to complete the dataset\n- Enjoy!\n\nIf you found LyricCanvas useful, please consider citing"
] | [
11,
119,
51
] | [
"passage: TAGS\n#license-mit #region-us \n# LyricCanvas Dataset\n- The lyricCanvas dataset contains approximately 10M lines of lyrics with corresponding visual elaborations (visualizable prompts).\n- It could be used to train large language models to translate highly abstract concepts and metaphorical\nphrases to visualizable prompts for image generation, see ViPE.\n- Due to copyright policies, we are not allowed to publish the lyrics, however, we release the visual elaborations and the scraper through which\nyou can collect the lyrics and rebuild LyricCanvas with no additional cost.## Compiling LyricCanvas\n- Download the lyric_canvas.csv file on this repository\n- Follow the steps laid out here to complete the dataset\n- Enjoy!\n\nIf you found LyricCanvas useful, please consider citing"
] |
2e00fbca5ff0f41f0820a1cb5d5d8c41fd209aaf | всем привееет | saemari/arctic_lab_news | [
"region:us"
] | 2023-12-22T10:09:42+00:00 | {} | 2023-12-22T10:13:20+00:00 | [] | [] | TAGS
#region-us
| всем привееет | [] | [
"TAGS\n#region-us \n"
] | [
6
] | [
"passage: TAGS\n#region-us \n"
] |
357326b6695ed1360ceaaa7888d823e5626e3dd8 |
# Dataset Summary
The FREDSum dataset is a comprehensive collection of transcripts and metadata from various political and public debates in France. The dataset aims to provide researchers, linguists, and data scientists with a rich source of debate content for analysis and natural language processing tasks.
## Languages
French
# Dataset Structure
The dataset is made up of 144 debates: 115 of them make up the train set, while 29 make up the test set.
## Data Fields
- id : Unique ID of an example
- Transcript : The text of the debate
- Abstractive_1-3 : Human summaries of the debate. The abstractive summary style goes from least to most abstractive: Abstractive 1 keeps names to avoid coreference resolution, while Abstractive 3 is free form
- Extractive_1-2 : Human selection of important utterances from the source debate
## Data splits
- train : 115
- test : 29
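Both splits can be loaded directly from the Hugging Face Hub — a minimal sketch, assuming the standard `datasets` loader works for this repository:
```python
from datasets import load_dataset

fredsum = load_dataset("linagora/FREDSum")
print(fredsum)  # expected: a 'train' split with 115 debates and a 'test' split with 29
```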
# Licensing Information
non-commercial licence: CC BY-SA 4.0
# Citation Information
If you use this dataset, please cite the following article:
Virgile Rennard, Guokan Shang, Damien Grari, Julie Hunter, and Michalis Vazirgiannis. 2023. FREDSum: A Dialogue Summarization Corpus for French Political Debates. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4241–4253, Singapore. Association for Computational Linguistics. | linagora/FREDSum | [
"task_categories:summarization",
"size_categories:n<1K",
"language:fr",
"license:cc-by-sa-4.0",
"NLP",
"Debates",
"Abstractive_Summarization",
"Extractive_Summarization",
"French",
"region:us"
] | 2023-12-22T10:10:28+00:00 | {"language": ["fr"], "license": "cc-by-sa-4.0", "size_categories": ["n<1K"], "task_categories": ["summarization"], "pretty_name": "FREDsum", "tags": ["NLP", "Debates", "Abstractive_Summarization", "Extractive_Summarization", "French"]} | 2024-01-02T09:56:40+00:00 | [] | [
"fr"
] | TAGS
#task_categories-summarization #size_categories-n<1K #language-French #license-cc-by-sa-4.0 #NLP #Debates #Abstractive_Summarization #Extractive_Summarization #French #region-us
|
# Dataset Summary
The FREDSum dataset is a comprehensive collection of transcripts and metadata from various political and public debates in France. The dataset aims to provide researchers, linguists, and data scientists with a rich source of debate content for analysis and natural language processing tasks.
## Languages
French
# Dataset Structure
The dataset is made of 144 debates, 115 of the debates make up the train set, while 29 make up the test set
## Data Fields
- id : Unique ID of an exemple
- Transcript : The text of the debate
- Abstractive_1-3 : Human summary of the debate. Abstractive summary style goes from least to most Abstractive - Abstractive 1 keeps names to avoid coreference resolution, while Abstractive 3 is free form
- Extractive_1-2 : Human selection of important utterances from the source debate
## Data splits
- train : 115
- test : 29
# Licensing Information
non-commercial licence: CC BY-SA 4.0
If you use this dataset, please cite the following article:
Virgile Rennard, Guokan Shang, Damien Grari, Julie Hunter, and Michalis Vazirgiannis. 2023. FREDSum: A Dialogue Summarization Corpus for French Political Debates. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4241–4253, Singapore. Association for Computational Linguistics. | [
"# Dataset Summary \n\nThe FREDSum dataset is a comprehensive collection of transcripts and metadata from various political and public debates in France. The dataset aims to provide researchers, linguists, and data scientists with a rich source of debate content for analysis and natural language processing tasks.",
"## Languages \n\nFrench",
"# Dataset Structure\n\nThe dataset is made of 144 debates, 115 of the debates make up the train set, while 29 make up the test set",
"## Data Fields\n\n- id : Unique ID of an exemple\n- Transcript : The text of the debate\n- Abstractive_1-3 : Human summary of the debate. Abstractive summary style goes from least to most Abstractive - Abstractive 1 keeps names to avoid coreference resolution, while Abstractive 3 is free form\n- Extractive_1-2 : Human selection of important utterances from the source debate",
"## Data splits\n\n- train : 115\n- test : 29",
"# Licensing Information\n\nnon-commercial licence: CC BY-SA 4.0\n\n\n\nIf you use this dataset, please cite the following article:\n\n Virgile Rennard, Guokan Shang, Damien Grari, Julie Hunter, and Michalis Vazirgiannis. 2023. FREDSum: A Dialogue Summarization Corpus for French Political Debates. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4241–4253, Singapore. Association for Computational Linguistics."
] | [
"TAGS\n#task_categories-summarization #size_categories-n<1K #language-French #license-cc-by-sa-4.0 #NLP #Debates #Abstractive_Summarization #Extractive_Summarization #French #region-us \n",
"# Dataset Summary \n\nThe FREDSum dataset is a comprehensive collection of transcripts and metadata from various political and public debates in France. The dataset aims to provide researchers, linguists, and data scientists with a rich source of debate content for analysis and natural language processing tasks.",
"## Languages \n\nFrench",
"# Dataset Structure\n\nThe dataset is made of 144 debates, 115 of the debates make up the train set, while 29 make up the test set",
"## Data Fields\n\n- id : Unique ID of an exemple\n- Transcript : The text of the debate\n- Abstractive_1-3 : Human summary of the debate. Abstractive summary style goes from least to most Abstractive - Abstractive 1 keeps names to avoid coreference resolution, while Abstractive 3 is free form\n- Extractive_1-2 : Human selection of important utterances from the source debate",
"## Data splits\n\n- train : 115\n- test : 29",
"# Licensing Information\n\nnon-commercial licence: CC BY-SA 4.0\n\n\n\nIf you use this dataset, please cite the following article:\n\n Virgile Rennard, Guokan Shang, Damien Grari, Julie Hunter, and Michalis Vazirgiannis. 2023. FREDSum: A Dialogue Summarization Corpus for French Political Debates. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4241–4253, Singapore. Association for Computational Linguistics."
] | [
72,
65,
4,
34,
83,
12,
116
] | [
"passage: TAGS\n#task_categories-summarization #size_categories-n<1K #language-French #license-cc-by-sa-4.0 #NLP #Debates #Abstractive_Summarization #Extractive_Summarization #French #region-us \n# Dataset Summary \n\nThe FREDSum dataset is a comprehensive collection of transcripts and metadata from various political and public debates in France. The dataset aims to provide researchers, linguists, and data scientists with a rich source of debate content for analysis and natural language processing tasks.## Languages \n\nFrench# Dataset Structure\n\nThe dataset is made of 144 debates, 115 of the debates make up the train set, while 29 make up the test set## Data Fields\n\n- id : Unique ID of an exemple\n- Transcript : The text of the debate\n- Abstractive_1-3 : Human summary of the debate. Abstractive summary style goes from least to most Abstractive - Abstractive 1 keeps names to avoid coreference resolution, while Abstractive 3 is free form\n- Extractive_1-2 : Human selection of important utterances from the source debate## Data splits\n\n- train : 115\n- test : 29# Licensing Information\n\nnon-commercial licence: CC BY-SA 4.0\n\n\n\nIf you use this dataset, please cite the following article:\n\n Virgile Rennard, Guokan Shang, Damien Grari, Julie Hunter, and Michalis Vazirgiannis. 2023. FREDSum: A Dialogue Summarization Corpus for French Political Debates. In Findings of the Association for Computational Linguistics: EMNLP 2023, pages 4241–4253, Singapore. Association for Computational Linguistics."
] |
3c5665577318fc6398ac9e350bd2fab7b87f59d7 |
# NLP Taxonomy Classification Data
The dataset consists of titles and abstracts from NLP-related papers. Each paper is annotated with multiple fields of study from the [NLP taxonomy](#nlp-taxonomy). Each sample is annotated with all possible lower-level concepts and their hypernyms in the [NLP taxonomy](#nlp-taxonomy). The training dataset contains 178,521 weakly annotated samples. The test dataset consists of 828 manually annotated samples from the EMNLP22 conference. The manually labeled test dataset might not contain all possible classes since it consists of EMNLP22 papers only, and some rarer classes haven’t been published there. Therefore, we advise creating an additional test or validation set from the train data that includes all the possible classes.
📄 Paper: [Exploring the Landscape of Natural Language Processing Research (RANLP 2023)](https://aclanthology.org/2023.ranlp-1.111)
💻 Code: [https://github.com/sebischair/Exploring-NLP-Research](https://github.com/sebischair/Exploring-NLP-Research)
🤗 Model: [https://huggingface.co/TimSchopf/nlp_taxonomy_classifier](https://huggingface.co/TimSchopf/nlp_taxonomy_classifier)
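The data can be loaded directly from the Hub; the sketch below uses the feature names declared in this repository's metadata (`id`, `title`, `abstract`, `classification_labels`, `numerical_classification_labels`):
```python
from datasets import load_dataset

ds = load_dataset("TimSchopf/nlp_taxonomy_data")
example = ds["test"][0]  # 828 manually annotated EMNLP22 samples
print(example["title"])
print(example["classification_labels"])  # all applicable taxonomy concepts, including hypernyms
```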
<a name="#nlp-taxonomy"/></a>
## NLP Taxonomy

## Citation information
When citing our work in academic papers and theses, please use this BibTeX entry:
```
@inproceedings{schopf-etal-2023-exploring,
title = "Exploring the Landscape of Natural Language Processing Research",
author = "Schopf, Tim and
Arabi, Karim and
Matthes, Florian",
editor = "Mitkov, Ruslan and
Angelova, Galia",
booktitle = "Proceedings of the 14th International Conference on Recent Advances in Natural Language Processing",
month = sep,
year = "2023",
address = "Varna, Bulgaria",
publisher = "INCOMA Ltd., Shoumen, Bulgaria",
url = "https://aclanthology.org/2023.ranlp-1.111",
pages = "1034--1045",
abstract = "As an efficient approach to understand, generate, and process natural language texts, research in natural language processing (NLP) has exhibited a rapid spread and wide adoption in recent years. Given the increasing research work in this area, several NLP-related approaches have been surveyed in the research community. However, a comprehensive study that categorizes established topics, identifies trends, and outlines areas for future research remains absent. Contributing to closing this gap, we have systematically classified and analyzed research papers in the ACL Anthology. As a result, we present a structured overview of the research landscape, provide a taxonomy of fields of study in NLP, analyze recent developments in NLP, summarize our findings, and highlight directions for future work.",
}
``` | TimSchopf/nlp_taxonomy_data | [
"license:mit",
"region:us"
] | 2023-12-22T10:14:06+00:00 | {"license": "mit", "dataset_info": {"features": [{"name": "id", "dtype": "string"}, {"name": "title", "dtype": "string"}, {"name": "abstract", "dtype": "string"}, {"name": "classification_labels", "sequence": "string"}, {"name": "numerical_classification_labels", "sequence": "int64"}], "splits": [{"name": "train", "num_bytes": 235500446, "num_examples": 178521}, {"name": "test", "num_bytes": 1175810, "num_examples": 828}], "download_size": 116387254, "dataset_size": 236676256}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}, {"split": "test", "path": "data/test-*"}]}]} | 2023-12-23T10:48:24+00:00 | [] | [] | TAGS
#license-mit #region-us
|
# NLP Taxonomy Classification Data
The dataset consists of titles and abstracts from NLP-related papers. Each paper is annotated with multiple fields of study from the NLP taxonomy. Each sample is annotated with all possible lower-level concepts and their hypernyms in the NLP taxonomy. The training dataset contains 178,521 weakly annotated samples. The test dataset consists of 828 manually annotated samples from the EMNLP22 conference. The manually labeled test dataset might not contain all possible classes since it consists of EMNLP22 papers only, and some rarer classes haven’t been published there. Therefore, we advise creating an additional test or validation set from the train data that includes all the possible classes.
Paper: Exploring the Landscape of Natural Language Processing Research (RANLP 2023)
Code: URL
Model: URL
<a name="#nlp-taxonomy"/></a>
## NLP Taxonomy
!NLP taxonomy
information
When citing our work in academic papers and theses, please use this BibTeX entry:
| [
"# NLP Taxonomy Classification Data\n\nThe dataset consists of titles and abstracts from NLP-related papers. Each paper is annotated with multiple fields of study from the NLP taxonomy. Each sample is annotated with all possible lower-level concepts and their hypernyms in the NLP taxonomy. The training dataset contains 178,521 weakly annotated samples. The test dataset consists of 828 manually annotated samples from the EMNLP22 conference. The manually labeled test dataset might not contain all possible classes since it consists of EMNLP22 papers only, and some rarer classes haven’t been published there. Therefore, we advise creating an additional test or validation set from the train data that includes all the possible classes.\n\n Paper: Exploring the Landscape of Natural Language Processing Research (RANLP 2023)\n\n Code: URL\n\n Model: URL\n\n<a name=\"#nlp-taxonomy\"/></a>",
"## NLP Taxonomy\n\n!NLP taxonomy\n\ninformation\nWhen citing our work in academic papers and theses, please use this BibTeX entry:"
] | [
"TAGS\n#license-mit #region-us \n",
"# NLP Taxonomy Classification Data\n\nThe dataset consists of titles and abstracts from NLP-related papers. Each paper is annotated with multiple fields of study from the NLP taxonomy. Each sample is annotated with all possible lower-level concepts and their hypernyms in the NLP taxonomy. The training dataset contains 178,521 weakly annotated samples. The test dataset consists of 828 manually annotated samples from the EMNLP22 conference. The manually labeled test dataset might not contain all possible classes since it consists of EMNLP22 papers only, and some rarer classes haven’t been published there. Therefore, we advise creating an additional test or validation set from the train data that includes all the possible classes.\n\n Paper: Exploring the Landscape of Natural Language Processing Research (RANLP 2023)\n\n Code: URL\n\n Model: URL\n\n<a name=\"#nlp-taxonomy\"/></a>",
"## NLP Taxonomy\n\n!NLP taxonomy\n\ninformation\nWhen citing our work in academic papers and theses, please use this BibTeX entry:"
] | [
11,
219,
35
] | [
"passage: TAGS\n#license-mit #region-us \n# NLP Taxonomy Classification Data\n\nThe dataset consists of titles and abstracts from NLP-related papers. Each paper is annotated with multiple fields of study from the NLP taxonomy. Each sample is annotated with all possible lower-level concepts and their hypernyms in the NLP taxonomy. The training dataset contains 178,521 weakly annotated samples. The test dataset consists of 828 manually annotated samples from the EMNLP22 conference. The manually labeled test dataset might not contain all possible classes since it consists of EMNLP22 papers only, and some rarer classes haven’t been published there. Therefore, we advise creating an additional test or validation set from the train data that includes all the possible classes.\n\n Paper: Exploring the Landscape of Natural Language Processing Research (RANLP 2023)\n\n Code: URL\n\n Model: URL\n\n<a name=\"#nlp-taxonomy\"/></a>## NLP Taxonomy\n\n!NLP taxonomy\n\ninformation\nWhen citing our work in academic papers and theses, please use this BibTeX entry:"
] |
354b3a78d110c4b73a54bd37a49b7b7658cbf1d0 | # Dataset Card for "helpfulness_data"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | EuriskoMobility/helpfulness_data | [
"region:us"
] | 2023-12-22T10:19:44+00:00 | {"configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}], "dataset_info": {"features": [{"name": "assistant", "dtype": "string"}, {"name": "helpfulness", "dtype": "float64"}, {"name": "prompter", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 51629240, "num_examples": 55668}], "download_size": 26107368, "dataset_size": 51629240}} | 2023-12-22T10:20:07+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "helpfulness_data"
More Information needed | [
"# Dataset Card for \"helpfulness_data\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"helpfulness_data\"\n\nMore Information needed"
] | [
6,
15
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"helpfulness_data\"\n\nMore Information needed"
] |
cdc8ff6f72b9c90e2297d07f0b01ea66b88c9d2f | # Dataset Card for "fialka-v2-dpo"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | 0x7o/fialka-v2-dpo | [
"region:us"
] | 2023-12-22T10:29:59+00:00 | {"dataset_info": {"features": [{"name": "prompt", "dtype": "string"}, {"name": "chosen", "dtype": "string"}, {"name": "rejected", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 6976884, "num_examples": 2470}], "download_size": 2419126, "dataset_size": 6976884}} | 2023-12-22T10:30:09+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "fialka-v2-dpo"
More Information needed | [
"# Dataset Card for \"fialka-v2-dpo\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"fialka-v2-dpo\"\n\nMore Information needed"
] | [
6,
17
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"fialka-v2-dpo\"\n\nMore Information needed"
] |
8eea6e01f31913788bdd20ea8ffcff4d1541a761 |
# Dataset Card for IFEval
<!-- Provide a quick summary of the dataset. -->
This dataset contains the prompts used in Google's [Instruction-Following Evaluation for Large Language Models](https://arxiv.org/abs/2311.07911). You can find the original source data and evaluation code [here](https://github.com/google-research/google-research/tree/master/instruction_following_eval).
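A minimal loading sketch (split and column names are not documented in this card, so check them against the repository):
```python
from datasets import load_dataset

ifeval = load_dataset("HuggingFaceH4/ifeval")
print(ifeval)  # inspect the available splits and prompt fields
```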
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | HuggingFaceH4/ifeval | [
"license:apache-2.0",
"arxiv:2311.07911",
"region:us"
] | 2023-12-22T11:20:03+00:00 | {"license": "apache-2.0"} | 2023-12-22T11:22:51+00:00 | [
"2311.07911"
] | [] | TAGS
#license-apache-2.0 #arxiv-2311.07911 #region-us
|
# Dataset Card for IFEval
This dataset contains the prompts used in Google's Instruction-Following Evaluation for Large Language Models. You can find the original source data and evaluation code here.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for IFEval\n\n\n\nThis dataset contains the prompts used in Google's Instruction-Following Evaluation for Large Language Models. You can find the original source data and evaluation code here.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#license-apache-2.0 #arxiv-2311.07911 #region-us \n",
"# Dataset Card for IFEval\n\n\n\nThis dataset contains the prompts used in Google's Instruction-Following Evaluation for Large Language Models. You can find the original source data and evaluation code here.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
23,
47,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#license-apache-2.0 #arxiv-2311.07911 #region-us \n# Dataset Card for IFEval\n\n\n\nThis dataset contains the prompts used in Google's Instruction-Following Evaluation for Large Language Models. You can find the original source data and evaluation code here.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
d616fc0dc3edf4dbb44f9d88725e317116063825 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | juosilva/refinedTasksMin | [
"region:us"
] | 2023-12-22T11:26:39+00:00 | {} | 2023-12-22T11:30:40+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
072ce6936a207d42b7b4a8126ae85818fb2220e2 |
# German DBMDZ BERT Corpus
This dataset includes all corpora that were used for pretraining the [German DBMDZ BERT Models](https://github.com/dbmdz/berts?tab=readme-ov-file#german-bert).
It consists of a Wikipedia dump and corpora from [OPUS](https://opus.nlpl.eu/):
| Filename | Description | Creation Date | File Size |
| ------------------- | ------------------ | ------------ | --------- |
| `dewiki.txt` | Wikipedia Dump | May 2019 | 5.1GB |
| `eubookshop.txt` | OPUS EUbookshop | November 2018 | 2.2GB |
| `news.2018.txt` | OPUS News corpora | January 2019 | 4.1GB |
| `opensubtitles.txt` | OPUS OpenSubtitles | November 2018 | 1.3GB |
| `paracrawl.txt` | OPUS ParaCrawl | November 2018 | 3.1GB | | stefan-it/german-dbmdz-bert-corpus | [
"language:de",
"license:cc-by-sa-3.0",
"region:us"
] | 2023-12-22T11:30:11+00:00 | {"language": ["de"], "license": "cc-by-sa-3.0"} | 2023-12-22T15:22:38+00:00 | [] | [
"de"
] | TAGS
#language-German #license-cc-by-sa-3.0 #region-us
| German DBMDZ BERT Corpus
========================
This dataset includes all corpora that were used for pretraining the German DBMDZ BERT Models.
It consists of a Wikipedia dump and corpora from OPUS:
| [] | [
"TAGS\n#language-German #license-cc-by-sa-3.0 #region-us \n"
] | [
21
] | [
"passage: TAGS\n#language-German #license-cc-by-sa-3.0 #region-us \n"
] |
487b938432215c1c8bf823e816699653401f4506 |
---
# Anekdots Dataset Summary
This dataset comprises a collection of humorous anecdotes ("anekdots") gathered from the period of January 4, 1996, to December 4, 2023. The dataset has undergone a thorough cleaning and preparation process to ensure its suitability for model training purposes. Researchers and developers can leverage this curated dataset for various applications, such as natural language processing and machine learning.
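A minimal loading sketch (the `igorktech/anekdots` path comes from the citation below; the split and column names follow the dataset's declared features):

```python
from datasets import load_dataset

# The dump ships as a single "train" split
ds = load_dataset("igorktech/anekdots", split="train")

# Each record carries the anecdote text plus its vote statistics
sample = ds[0]
print(sample["text"])
print(sample["upvote"], sample["downvote"], sample["total_mark"])
```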
---
# Dataset License Summary
This dataset is released under the Open Data Commons Attribution License (ODC-BY). The licensor does not claim copyright on the content and encourages wide use and distribution.
## Disclaimer
The dataset's author explicitly disclaims any rights to the content and assumes no responsibility for its usage. The dataset may contain materials from [anekdot.ru](https://www.anekdot.ru/), and users are encouraged to refer to the website for additional context.
## Warning
The administration of [anekdot.ru](https://www.anekdot.ru/) disclaims responsibility for submitted content, potential legal violations, or offensive nature. Rights to published materials belong to their respective owners, and the website administration is not liable for third-party use. The administration reserves the right to use information at its discretion and may remove user-submitted materials.
## Dataset Author Disclaimer
The dataset's author explicitly states no claim to content rights and is not responsible for its accuracy, legality, or appropriateness. Users are advised to exercise discretion and judgment when utilizing the dataset.
---
### Citation
```
@MISC{igorktech/anekdots,
author = {Igor Kuzmin},
title = {Russian anecdotes dump for 30 years},
url = {https://huggingface.co/datasets/igorktech/anekdots},
year = 2023
}
``` | igorktech/anekdots | [
"task_categories:text-generation",
"size_categories:100K<n<1M",
"language:ru",
"license:odc-by",
"not-for-all-audiences",
"roleplay",
"region:us"
] | 2023-12-22T11:50:08+00:00 | {"language": ["ru"], "license": "odc-by", "size_categories": ["100K<n<1M"], "task_categories": ["text-generation"], "pretty_name": "Anekdots", "tags": ["not-for-all-audiences", "roleplay"], "dataset_info": {"features": [{"name": "total_mark", "dtype": "int64"}, {"name": "date", "dtype": "int64"}, {"name": "downvote", "dtype": "int64"}, {"name": "total_votes", "dtype": "string"}, {"name": "upvote", "dtype": "int64"}, {"name": "text", "dtype": "string"}, {"name": "hash", "dtype": "string"}, {"name": "alpha_frac", "dtype": "float64"}, {"name": "LDR", "dtype": "float64"}, {"name": "days_since_publication", "dtype": "int64"}, {"name": "time_decay", "dtype": "float64"}, {"name": "LDR_time_decay", "dtype": "float64"}], "splits": [{"name": "train", "num_bytes": 209320893, "num_examples": 497596}], "download_size": 121676024, "dataset_size": 209320893}, "configs": [{"config_name": "default", "data_files": [{"split": "train", "path": "data/train-*"}]}]} | 2024-01-22T15:14:35+00:00 | [] | [
"ru"
] | TAGS
#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #license-odc-by #not-for-all-audiences #roleplay #region-us
|
---
# Anekdots Dataset Summary
This dataset comprises a collection of humorous anecdotes ("anekdots") gathered from the period of January 4, 1996, to December 4, 2023. The dataset has undergone a thorough cleaning and preparation process to ensure its suitability for model training purposes. Researchers and developers can leverage this curated dataset for various applications, such as natural language processing and machine learning.
---
# Dataset License Summary
This dataset is released under the Open Data Commons Attribution License (ODC-BY). The licensor does not claim copyright on the content and encourages wide use and distribution.
## Disclaimer
The dataset's author explicitly disclaims any rights to the content and assumes no responsibility for its usage. The dataset may contain materials from URL, and users are encouraged to refer to the website for additional context.
## Warning
The administration of URL disclaims responsibility for submitted content, potential legal violations, or offensive nature. Rights to published materials belong to their respective owners, and the website administration is not liable for third-party use. The administration reserves the right to use information at its discretion and may remove user-submitted materials.
## Dataset Author Disclaimer
The dataset's author explicitly states no claim to content rights and is not responsible for its accuracy, legality, or appropriateness. Users are advised to exercise discretion and judgment when utilizing the dataset.
---
| [
"# Anekdots Dataset Summary\n\nThis dataset comprises a collection of humorous anecdotes (\"anecdotes\") gathered from the period of January 4, 1996, to December 4, 2023. The dataset has undergone a thorough cleaning and preparation process to ensure its suitability for model training purposes. Researchers and developers can leverage this curated dataset for various applications, such as natural language processing and machine learning.\n\n---",
"# Dataset License Summary\n\nThis dataset is released under the Open Data Commons Attribution License (ODC-BY). The licensor does not claim copyright on the content and encourages wide use and distribution.",
"## Disclaimer\nThe dataset's author explicitly disclaims any rights to the content and assumes no responsibility for its usage. The dataset may contain materials from URL, and users are encouraged to refer to the website for additional context.",
"## Warning\nThe administration of URL disclaims responsibility for submitted content, potential legal violations, or offensive nature. Rights to published materials belong to their respective owners, and the website administration is not liable for third-party use. The administration reserves the right to use information at its discretion and may remove user-submitted materials.",
"## Dataset Author Disclaimer\nThe dataset's author explicitly states no claim to content rights and is not responsible for its accuracy, legality, or appropriateness. Users are advised to exercise discretion and judgment when utilizing the dataset.\n\n---"
] | [
"TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #license-odc-by #not-for-all-audiences #roleplay #region-us \n",
"# Anekdots Dataset Summary\n\nThis dataset comprises a collection of humorous anecdotes (\"anecdotes\") gathered from the period of January 4, 1996, to December 4, 2023. The dataset has undergone a thorough cleaning and preparation process to ensure its suitability for model training purposes. Researchers and developers can leverage this curated dataset for various applications, such as natural language processing and machine learning.\n\n---",
"# Dataset License Summary\n\nThis dataset is released under the Open Data Commons Attribution License (ODC-BY). The licensor does not claim copyright on the content and encourages wide use and distribution.",
"## Disclaimer\nThe dataset's author explicitly disclaims any rights to the content and assumes no responsibility for its usage. The dataset may contain materials from URL, and users are encouraged to refer to the website for additional context.",
"## Warning\nThe administration of URL disclaims responsibility for submitted content, potential legal violations, or offensive nature. Rights to published materials belong to their respective owners, and the website administration is not liable for third-party use. The administration reserves the right to use information at its discretion and may remove user-submitted materials.",
"## Dataset Author Disclaimer\nThe dataset's author explicitly states no claim to content rights and is not responsible for its accuracy, legality, or appropriateness. Users are advised to exercise discretion and judgment when utilizing the dataset.\n\n---"
] | [
55,
99,
42,
50,
73,
55
] | [
"passage: TAGS\n#task_categories-text-generation #size_categories-100K<n<1M #language-Russian #license-odc-by #not-for-all-audiences #roleplay #region-us \n# Anekdots Dataset Summary\n\nThis dataset comprises a collection of humorous anecdotes (\"anecdotes\") gathered from the period of January 4, 1996, to December 4, 2023. The dataset has undergone a thorough cleaning and preparation process to ensure its suitability for model training purposes. Researchers and developers can leverage this curated dataset for various applications, such as natural language processing and machine learning.\n\n---# Dataset License Summary\n\nThis dataset is released under the Open Data Commons Attribution License (ODC-BY). The licensor does not claim copyright on the content and encourages wide use and distribution.## Disclaimer\nThe dataset's author explicitly disclaims any rights to the content and assumes no responsibility for its usage. The dataset may contain materials from URL, and users are encouraged to refer to the website for additional context.## Warning\nThe administration of URL disclaims responsibility for submitted content, potential legal violations, or offensive nature. Rights to published materials belong to their respective owners, and the website administration is not liable for third-party use. The administration reserves the right to use information at its discretion and may remove user-submitted materials.## Dataset Author Disclaimer\nThe dataset's author explicitly states no claim to content rights and is not responsible for its accuracy, legality, or appropriateness. Users are advised to exercise discretion and judgment when utilizing the dataset.\n\n---"
] |
a1d486a4394887ae3165b2d5a498eb5fba0653ea |
Homepage: https://github.com/GGLAB-KU/turkish-plu
| mcemilg/turkish-plu-goal-inference | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:tr",
"region:us"
] | 2023-12-22T11:55:10+00:00 | {"language": ["tr"], "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"]} | 2023-12-22T12:12:45+00:00 | [] | [
"tr"
] | TAGS
#task_categories-text-classification #size_categories-100K<n<1M #language-Turkish #region-us
|
Homepage: URL
| [] | [
"TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Turkish #region-us \n"
] | [
35
] | [
"passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Turkish #region-us \n"
] |
1b81de117b86e5f4e1c282d7a8a67cb16b3791f5 | # Dataset Card for "PHYSIGENAI-phy-small"
[More Information needed](https://github.com/huggingface/datasets/blob/main/CONTRIBUTING.md#how-to-contribute-to-the-dataset-cards) | ashu3984/PHYSIGENAI-phy-small | [
"region:us"
] | 2023-12-22T12:05:42+00:00 | {"dataset_info": {"features": [{"name": "instruction", "dtype": "string"}, {"name": "input", "dtype": "string"}, {"name": "output", "dtype": "string"}], "splits": [{"name": "train", "num_bytes": 346786, "num_examples": 785}], "download_size": 112107, "dataset_size": 346786}} | 2023-12-22T12:28:04+00:00 | [] | [] | TAGS
#region-us
| # Dataset Card for "PHYSIGENAI-phy-small"
More Information needed | [
"# Dataset Card for \"PHYSIGENAI-phy-small\"\n\nMore Information needed"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for \"PHYSIGENAI-phy-small\"\n\nMore Information needed"
] | [
6,
19
] | [
"passage: TAGS\n#region-us \n# Dataset Card for \"PHYSIGENAI-phy-small\"\n\nMore Information needed"
] |
9cce0eb810441da4281ee8800ad6dcf65a207189 |
# Dataset Card for Dataset Name
<!-- Provide a quick summary of the dataset. -->
This dataset card aims to be a base template for new datasets. It has been generated using [this raw template](https://github.com/huggingface/huggingface_hub/blob/main/src/huggingface_hub/templates/datasetcard_template.md?plain=1).
## Dataset Details
### Dataset Description
<!-- Provide a longer summary of what this dataset is. -->
- **Curated by:** [More Information Needed]
- **Funded by [optional]:** [More Information Needed]
- **Shared by [optional]:** [More Information Needed]
- **Language(s) (NLP):** [More Information Needed]
- **License:** [More Information Needed]
### Dataset Sources [optional]
<!-- Provide the basic links for the dataset. -->
- **Repository:** [More Information Needed]
- **Paper [optional]:** [More Information Needed]
- **Demo [optional]:** [More Information Needed]
## Uses
<!-- Address questions around how the dataset is intended to be used. -->
### Direct Use
<!-- This section describes suitable use cases for the dataset. -->
[More Information Needed]
### Out-of-Scope Use
<!-- This section addresses misuse, malicious use, and uses that the dataset will not work well for. -->
[More Information Needed]
## Dataset Structure
<!-- This section provides a description of the dataset fields, and additional information about the dataset structure such as criteria used to create the splits, relationships between data points, etc. -->
[More Information Needed]
## Dataset Creation
### Curation Rationale
<!-- Motivation for the creation of this dataset. -->
[More Information Needed]
### Source Data
<!-- This section describes the source data (e.g. news text and headlines, social media posts, translated sentences, ...). -->
#### Data Collection and Processing
<!-- This section describes the data collection and processing process such as data selection criteria, filtering and normalization methods, tools and libraries used, etc. -->
[More Information Needed]
#### Who are the source data producers?
<!-- This section describes the people or systems who originally created the data. It should also include self-reported demographic or identity information for the source data creators if this information is available. -->
[More Information Needed]
### Annotations [optional]
<!-- If the dataset contains annotations which are not part of the initial data collection, use this section to describe them. -->
#### Annotation process
<!-- This section describes the annotation process such as annotation tools used in the process, the amount of data annotated, annotation guidelines provided to the annotators, interannotator statistics, annotation validation, etc. -->
[More Information Needed]
#### Who are the annotators?
<!-- This section describes the people or systems who created the annotations. -->
[More Information Needed]
#### Personal and Sensitive Information
<!-- State whether the dataset contains data that might be considered personal, sensitive, or private (e.g., data that reveals addresses, uniquely identifiable names or aliases, racial or ethnic origins, sexual orientations, religious beliefs, political opinions, financial or health data, etc.). If efforts were made to anonymize the data, describe the anonymization process. -->
[More Information Needed]
## Bias, Risks, and Limitations
<!-- This section is meant to convey both technical and sociotechnical limitations. -->
[More Information Needed]
### Recommendations
<!-- This section is meant to convey recommendations with respect to the bias, risk, and technical limitations. -->
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
## Citation [optional]
<!-- If there is a paper or blog post introducing the dataset, the APA and Bibtex information for that should go in this section. -->
**BibTeX:**
[More Information Needed]
**APA:**
[More Information Needed]
## Glossary [optional]
<!-- If relevant, include terms and calculations in this section that can help readers understand the dataset or dataset card. -->
[More Information Needed]
## More Information [optional]
[More Information Needed]
## Dataset Card Authors [optional]
[More Information Needed]
## Dataset Card Contact
[More Information Needed] | juosilva/refinedTasksMinPrompt | [
"region:us"
] | 2023-12-22T12:08:54+00:00 | {} | 2023-12-22T12:11:20+00:00 | [] | [] | TAGS
#region-us
|
# Dataset Card for Dataset Name
This dataset card aims to be a base template for new datasets. It has been generated using this raw template.
## Dataset Details
### Dataset Description
- Curated by:
- Funded by [optional]:
- Shared by [optional]:
- Language(s) (NLP):
- License:
### Dataset Sources [optional]
- Repository:
- Paper [optional]:
- Demo [optional]:
## Uses
### Direct Use
### Out-of-Scope Use
## Dataset Structure
## Dataset Creation
### Curation Rationale
### Source Data
#### Data Collection and Processing
#### Who are the source data producers?
### Annotations [optional]
#### Annotation process
#### Who are the annotators?
#### Personal and Sensitive Information
## Bias, Risks, and Limitations
### Recommendations
Users should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.
[optional]
BibTeX:
APA:
## Glossary [optional]
## More Information [optional]
## Dataset Card Authors [optional]
## Dataset Card Contact
| [
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
"TAGS\n#region-us \n",
"# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.",
"## Dataset Details",
"### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:",
"### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:",
"## Uses",
"### Direct Use",
"### Out-of-Scope Use",
"## Dataset Structure",
"## Dataset Creation",
"### Curation Rationale",
"### Source Data",
"#### Data Collection and Processing",
"#### Who are the source data producers?",
"### Annotations [optional]",
"#### Annotation process",
"#### Who are the annotators?",
"#### Personal and Sensitive Information",
"## Bias, Risks, and Limitations",
"### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:",
"## Glossary [optional]",
"## More Information [optional]",
"## Dataset Card Authors [optional]",
"## Dataset Card Contact"
] | [
6,
34,
4,
40,
29,
3,
4,
9,
6,
5,
7,
4,
7,
10,
9,
5,
9,
8,
10,
46,
8,
7,
10,
5
] | [
"passage: TAGS\n#region-us \n# Dataset Card for Dataset Name\n\n\n\nThis dataset card aims to be a base template for new datasets. It has been generated using this raw template.## Dataset Details### Dataset Description\n\n\n\n\n\n- Curated by: \n- Funded by [optional]: \n- Shared by [optional]: \n- Language(s) (NLP): \n- License:### Dataset Sources [optional]\n\n\n\n- Repository: \n- Paper [optional]: \n- Demo [optional]:## Uses### Direct Use### Out-of-Scope Use## Dataset Structure## Dataset Creation### Curation Rationale### Source Data#### Data Collection and Processing#### Who are the source data producers?### Annotations [optional]#### Annotation process#### Who are the annotators?#### Personal and Sensitive Information## Bias, Risks, and Limitations### Recommendations\n\n\n\nUsers should be made aware of the risks, biases and limitations of the dataset. More information needed for further recommendations.\n\n[optional]\n\n\n\nBibTeX:\n\n\n\nAPA:## Glossary [optional]## More Information [optional]## Dataset Card Authors [optional]## Dataset Card Contact"
] |
82bc5079656a52445db4e7fae8f590dd18ee590b |
# Crypto & Stock Tweets
## Overview
This dataset is a combination of publicly available financial tweets.
## Dataset Size
- Stock Tweets: 2,624,314
- Crypto Tweets: 5,748,725
- Bitcoin Tweets: 4,820,915
## Sources
This dataset is a combination of data from various reputable sources, each contributing a unique perspective on financial tweets:
- [Stock Market Tweets Data](https://ieee-dataport.org/open-access/stock-market-tweets-data): 923,673 rows of stock tweets
- [Stock Market Tweets](https://huggingface.co/datasets/mjw/stock_market_tweets): 1,700,641 rows of stock tweets
- [Crypto Tweets](https://www.kaggle.com/datasets/leoth9/crypto-tweets): 10,438 rows of cryptocurrency tweets
- [Influencers' Tweets In Cryptocurrency](https://data.mendeley.com/datasets/8fbdhh72gs/5): 16,512 rows of cryptocurrency tweets
- [Bitcoin Tweets](https://data.mendeley.com/datasets/x7yvshrnxy/1): 76,797 rows of bitcoin tweets
- [Bitcoin Tweets](https://www.kaggle.com/datasets/kaushiksuresh147/bitcoin-tweets): 4,863,751 rows of bitcoin tweets
- [Crypto Tweets](https://www.kaggle.com/datasets/tleonel/crypto-tweets-80k-in-eng-aug-2022): 80,000 rows of cryptocurrency tweets
- [Cryptocurrency Sentiment Tweets](https://www.kaggle.com/datasets/rezasemyari/crypto-sentiment-tweets): 824,908 rows of cryptocurrency tweets
- [Financial Tweets](https://huggingface.co/datasets/StephanAkkerman/financial-tweets): 263,119 rows of financial tweets
- [Cryptocurrency Tweets](https://github.com/am15h/CrypTop12): 576,836 rows of cryptocurrency tweets
## Usage
This dataset can be used for pre-training language models on financial tweets.
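A rough loading sketch (assuming the Hugging Face `datasets` library and a default train split):

```python
from datasets import load_dataset

# Inspect the splits and columns before feeding the tweets into a pre-training pipeline
ds = load_dataset("StephanAkkerman/crypto-stock-tweets")
print(ds)
print(ds["train"][0])  # peek at a single tweet record
```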
## Pre-processing Steps
Originally, the combined datasets consisted of 9,336,675 rows. However, they contained duplicates and tweets that were not useful. The dataset has been cleaned of `t.co` URLs, duplicate text, empty text, and tweets that end with '...'.
As a result, the cleaned dataset consists of 8,024,269 rows, which is the version available here.
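The exact cleaning script is not published here, but a minimal pandas sketch of the steps described above could look like the following (the `text` column name is an assumption):

```python
import pandas as pd


def clean_tweets(df: pd.DataFrame) -> pd.DataFrame:
    """Illustrative version of the cleaning steps described above."""
    df = df.copy()
    # Strip t.co short links, then trim surrounding whitespace
    df["text"] = df["text"].str.replace(r"https?://t\.co/\S+", "", regex=True).str.strip()
    # Drop empty and duplicated texts
    df = df[df["text"].str.len() > 0].drop_duplicates(subset="text")
    # Drop truncated tweets that end with '...'
    df = df[~df["text"].str.endswith("...")]
    return df.reset_index(drop=True)
```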
## Acknowledgements
We extend our heartfelt gratitude to all the authors and contributors of the original datasets. Their efforts in data collection and curation have been pivotal in creating this comprehensive resource.
## License
This dataset is made available under the CC-BY-4.0 license, adhering to the licensing terms of the original datasets. | StephanAkkerman/crypto-stock-tweets | [
"size_categories:1M<n<10M",
"language:en",
"license:cc-by-4.0",
"finance",
"twitter",
"tweets",
"crypto",
"stocks",
"region:us"
] | 2023-12-22T12:11:33+00:00 | {"language": ["en"], "license": "cc-by-4.0", "size_categories": ["1M<n<10M"], "pretty_name": "Crypto & Stock Tweets", "tags": ["finance", "twitter", "tweets", "crypto", "stocks"]} | 2023-12-22T15:53:26+00:00 | [] | [
"en"
] | TAGS
#size_categories-1M<n<10M #language-English #license-cc-by-4.0 #finance #twitter #tweets #crypto #stocks #region-us
|
# Crypto & Stock Tweets
## Overview
This dataset is a combination of publicly available financial tweets.
## Dataset Size
- Stock Tweets: 2,624,314
- Crypto Tweets: 5,748,725
- Bitcoin Tweets: 4,820,915
## Sources
This dataset is a combination of data from various reputable sources, each contributing a unique perspective on financial tweets:
- Stock Market Tweets Data: 923,673 rows of stock tweets
- Stock Market Tweets: 1,700,641 rows of stock tweets
- Crypto Tweets: 10,438 rows of cryptocurrency tweets
- Influencers' Tweets In Cryptocurrency: 16,512 rows of cryptocurrency tweets
- Bitcoin Tweets: 76,797 rows of bitcoin tweets
- Bitcoin Tweets: 4,863,751 rows of bitcoin tweets
- Crypto Tweets: 80,000 rows of cryptocurrency tweets
- Cryptocurrency Sentiment Tweets: 824,908 rows of cryptocurrency tweets
- Financial Tweets: 263,119 rows of financial tweets
- Cryptocurrency Tweets: 576,836 rows of cryptocurrency tweets
## Usage
This dataset can be used for pre-training language models on financial tweets.
## Pre-processing Steps
Originally, the combined datasets consisted of 9,336,675 rows. However, they contained duplicates and tweets that were not useful. The dataset has been cleaned of 't.co' URLs, duplicate text, empty text, and tweets that end with '...'.
As a result, the cleaned dataset consists of 8,024,269 rows, which is the version available here.
## Acknowledgements
We extend our heartfelt gratitude to all the authors and contributors of the original datasets. Their efforts in data collection and curation have been pivotal in creating this comprehensive resource.
## License
This dataset is made available under the CC-BY-4.0 license, adhering to the licensing terms of the original datasets. | [
"# Crypto & Stock Tweets",
"## Overview\nThis dataset is a combination of publically available financial tweets.",
"## Datset Size\n- Stock Tweets: 2,624,314\n- Crypto Tweets: 5,748,725\n - Bitcoin Tweets: 4,820,915",
"## Sources\nThis dataset is a combination of data from various reputable sources, each contributing a unique perspective on financial tweets:\n- Stock Market Tweets Data: 923,673 rows of stock tweets\n- Stock Market Tweets: 1,700,641 rows of stock tweets\n- Crypto Tweets: 10,438 rows of cryptocurrency tweets\n- Influencers' Tweets In Cryptocurrency: 16,512 rows of cryptocurrency tweets\n- Bitcoin Tweets: 76,797 of bitcoin tweets\n- Bitcoin Tweets: 4,863,751 rows of bitcoin tweets\n- Crypto Tweets: 80,000 rows of cryptocurrency tweets\n- Cryptocurreny Sentiment Tweets: 824,908 rows of cryptocurrency tweets\n- Financial Tweets: 263,119 rows of financial tweets\n- Cryptocurrency Tweets: 576,836 rows of cryptocurrency tweets",
"## Usage\nThis dataset can be used for pre-training language models on financial tweets.",
"## Pre-processing Steps\nOriginally the combined datasets consist of 9,336,675 rows. However, there are some duplicates and not useful tweets in there. The dataset has been cleaned of 't.co' URLs, duplicate text, empty text, and tweets that end with '...'.\nAs a result the cleaned dataset consist of 8,024,269 rows, which is the one available here.",
"## Acknowledgements\n\nWe extend our heartfelt gratitude to all the authors and contributors of the original datasets. Their efforts in data collection and curation have been pivotal in creating this comprehensive resource.",
"## License\n\nThis dataset is made available under the CC-BY-4.0 license, adhering to the licensing terms of the original datasets."
] | [
"TAGS\n#size_categories-1M<n<10M #language-English #license-cc-by-4.0 #finance #twitter #tweets #crypto #stocks #region-us \n",
"# Crypto & Stock Tweets",
"## Overview\nThis dataset is a combination of publically available financial tweets.",
"## Datset Size\n- Stock Tweets: 2,624,314\n- Crypto Tweets: 5,748,725\n - Bitcoin Tweets: 4,820,915",
"## Sources\nThis dataset is a combination of data from various reputable sources, each contributing a unique perspective on financial tweets:\n- Stock Market Tweets Data: 923,673 rows of stock tweets\n- Stock Market Tweets: 1,700,641 rows of stock tweets\n- Crypto Tweets: 10,438 rows of cryptocurrency tweets\n- Influencers' Tweets In Cryptocurrency: 16,512 rows of cryptocurrency tweets\n- Bitcoin Tweets: 76,797 of bitcoin tweets\n- Bitcoin Tweets: 4,863,751 rows of bitcoin tweets\n- Crypto Tweets: 80,000 rows of cryptocurrency tweets\n- Cryptocurreny Sentiment Tweets: 824,908 rows of cryptocurrency tweets\n- Financial Tweets: 263,119 rows of financial tweets\n- Cryptocurrency Tweets: 576,836 rows of cryptocurrency tweets",
"## Usage\nThis dataset can be used for pre-training language models on financial tweets.",
"## Pre-processing Steps\nOriginally the combined datasets consist of 9,336,675 rows. However, there are some duplicates and not useful tweets in there. The dataset has been cleaned of 't.co' URLs, duplicate text, empty text, and tweets that end with '...'.\nAs a result the cleaned dataset consist of 8,024,269 rows, which is the one available here.",
"## Acknowledgements\n\nWe extend our heartfelt gratitude to all the authors and contributors of the original datasets. Their efforts in data collection and curation have been pivotal in creating this comprehensive resource.",
"## License\n\nThis dataset is made available under the CC-BY-4.0 license, adhering to the licensing terms of the original datasets."
] | [
45,
6,
17,
35,
201,
20,
101,
46,
33
] | [
"passage: TAGS\n#size_categories-1M<n<10M #language-English #license-cc-by-4.0 #finance #twitter #tweets #crypto #stocks #region-us \n# Crypto & Stock Tweets## Overview\nThis dataset is a combination of publically available financial tweets.## Datset Size\n- Stock Tweets: 2,624,314\n- Crypto Tweets: 5,748,725\n - Bitcoin Tweets: 4,820,915## Sources\nThis dataset is a combination of data from various reputable sources, each contributing a unique perspective on financial tweets:\n- Stock Market Tweets Data: 923,673 rows of stock tweets\n- Stock Market Tweets: 1,700,641 rows of stock tweets\n- Crypto Tweets: 10,438 rows of cryptocurrency tweets\n- Influencers' Tweets In Cryptocurrency: 16,512 rows of cryptocurrency tweets\n- Bitcoin Tweets: 76,797 of bitcoin tweets\n- Bitcoin Tweets: 4,863,751 rows of bitcoin tweets\n- Crypto Tweets: 80,000 rows of cryptocurrency tweets\n- Cryptocurreny Sentiment Tweets: 824,908 rows of cryptocurrency tweets\n- Financial Tweets: 263,119 rows of financial tweets\n- Cryptocurrency Tweets: 576,836 rows of cryptocurrency tweets## Usage\nThis dataset can be used for pre-training language models on financial tweets.## Pre-processing Steps\nOriginally the combined datasets consist of 9,336,675 rows. However, there are some duplicates and not useful tweets in there. The dataset has been cleaned of 't.co' URLs, duplicate text, empty text, and tweets that end with '...'.\nAs a result the cleaned dataset consist of 8,024,269 rows, which is the one available here.## Acknowledgements\n\nWe extend our heartfelt gratitude to all the authors and contributors of the original datasets. Their efforts in data collection and curation have been pivotal in creating this comprehensive resource.## License\n\nThis dataset is made available under the CC-BY-4.0 license, adhering to the licensing terms of the original datasets."
] |
3020a6573609c077aa698992054e8e3953475be2 |
Homepage: https://github.com/GGLAB-KU/turkish-plu | mcemilg/turkish-plu-step-inference | [
"task_categories:text-classification",
"size_categories:100K<n<1M",
"language:tr",
"region:us"
] | 2023-12-22T12:14:22+00:00 | {"language": ["tr"], "size_categories": ["100K<n<1M"], "task_categories": ["text-classification"]} | 2023-12-22T12:17:40+00:00 | [] | [
"tr"
] | TAGS
#task_categories-text-classification #size_categories-100K<n<1M #language-Turkish #region-us
|
Homepage: URL | [] | [
"TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Turkish #region-us \n"
] | [
35
] | [
"passage: TAGS\n#task_categories-text-classification #size_categories-100K<n<1M #language-Turkish #region-us \n"
] |
50eb359aac75fe6bc43086bfa14e5a96baf57d8b | # Description
Translated versions of the [AlpacaEval prompt dataset](https://huggingface.co/datasets/tatsu-lab/alpaca_eval) for evaluating the performance of chat LLMs.
Translations were generated with `gpt-3.5-turbo-0613` using the following prompt template (adapted from [Lai et al., 2023](https://arxiv.org/pdf/2307.16039.pdf)):
```
You are a helpful assistant.
Translate the following text into {{target_language}}.
Keep the structure of the original text and preserve things like code and names.
Please ensure that your response contains only the translated text.
The translation must convey the same meaning as the original and be natural for
native speakers with correct grammar and proper word choices.
Your translation must also use exact terminology to provide
accurate information even for the experts in the related fields.
Original: {{source_text}}
Translation into {{target_language}}:
```
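For illustration only, a minimal sketch of applying the template with the legacy `openai` Python SDK; whether the first line was passed as a system message, as well as the decoding settings, is not specified, so the whole filled template is sent as a single user message here:

```python
import openai  # legacy (<1.0) SDK interface


def translate(source_text: str, target_language: str, template: str) -> str:
    # Fill the {{...}} placeholders of the template shown above
    prompt = (template
              .replace("{{target_language}}", target_language)
              .replace("{{source_text}}", source_text))
    response = openai.ChatCompletion.create(
        model="gpt-3.5-turbo-0613",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # assumption: deterministic decoding
    )
    return response["choices"][0]["message"]["content"].strip()
```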
# Usage
```python
from datasets import load_dataset
ds = load_dataset('ZurichNLP/mlit-alpaca-eval', 'ca')
print(ds)
>>> DatasetDict({
test: Dataset({
features: ['instruction'],
num_rows: 805
})
})
```
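To fetch the prompts for every language config listed in the metadata below:

```python
from datasets import load_dataset

langs = ["ca", "da", "de", "el", "en", "es", "fr", "hi", "is", "no", "ru", "sv", "zh"]
prompts = {lang: load_dataset("ZurichNLP/mlit-alpaca-eval", lang, split="test") for lang in langs}
print({lang: len(d) for lang, d in prompts.items()})  # 805 prompts per language
```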
# Citation
```
@misc{kew2023turning,
title={Turning English-centric LLMs Into Polyglots: How Much Multilinguality Is Needed?},
author={Tannon Kew and Florian Schottmann and Rico Sennrich},
year={2023},
eprint={2312.12683},
archivePrefix={arXiv},
primaryClass={cs.CL}
}
```
---
dataset_info:
- config_name: ca
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 154255
num_examples: 805
download_size: 99320
dataset_size: 154255
- config_name: da
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 144724
num_examples: 805
download_size: 96555
dataset_size: 144724
- config_name: de
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 164871
num_examples: 805
download_size: 109435
dataset_size: 164871
- config_name: el
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 287985
num_examples: 805
download_size: 143043
dataset_size: 287985
- config_name: en
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 136100
num_examples: 805
download_size: 88817
dataset_size: 136100
- config_name: es
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 157880
num_examples: 805
download_size: 100029
dataset_size: 157880
- config_name: fr
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 168389
num_examples: 805
download_size: 104885
dataset_size: 168389
- config_name: hi
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 353161
num_examples: 805
download_size: 140012
dataset_size: 353161
- config_name: is
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 152739
num_examples: 805
download_size: 99913
dataset_size: 152739
- config_name: 'no'
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 141316
num_examples: 805
download_size: 94018
dataset_size: 141316
- config_name: ru
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 262317
num_examples: 805
download_size: 133403
dataset_size: 262317
- config_name: sv
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 146366
num_examples: 805
download_size: 96223
dataset_size: 146366
- config_name: zh
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 125499
num_examples: 805
download_size: 87092
dataset_size: 125499
configs:
- config_name: ca
data_files:
- split: test
path: ca/test-*
- config_name: da
data_files:
- split: test
path: da/test-*
- config_name: de
data_files:
- split: test
path: de/test-*
- config_name: el
data_files:
- split: test
path: el/test-*
- config_name: en
data_files:
- split: test
path: en/test-*
- config_name: es
data_files:
- split: test
path: es/test-*
- config_name: fr
data_files:
- split: test
path: fr/test-*
- config_name: hi
data_files:
- split: test
path: hi/test-*
- config_name: is
data_files:
- split: test
path: is/test-*
- config_name: 'no'
data_files:
- split: test
path: no/test-*
- config_name: ru
data_files:
- split: test
path: ru/test-*
- config_name: sv
data_files:
- split: test
path: sv/test-*
- config_name: zh
data_files:
- split: test
path: zh/test-*
license: cc
task_categories:
- conversational
- question-answering
language:
- en
- ca
- bg
- da
- de
- el
- es
- fr
- hi
- is
- 'no'
- ru
- sv
- zh
---
---
dataset_info:
- config_name: ca
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 154255
num_examples: 805
download_size: 99320
dataset_size: 154255
- config_name: da
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 144724
num_examples: 805
download_size: 96555
dataset_size: 144724
- config_name: de
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 164871
num_examples: 805
download_size: 109435
dataset_size: 164871
- config_name: el
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 287985
num_examples: 805
download_size: 143043
dataset_size: 287985
- config_name: en
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 136100
num_examples: 805
download_size: 88817
dataset_size: 136100
- config_name: es
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 157880
num_examples: 805
download_size: 100029
dataset_size: 157880
- config_name: fr
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 168389
num_examples: 805
download_size: 104885
dataset_size: 168389
- config_name: hi
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 353161
num_examples: 805
download_size: 140012
dataset_size: 353161
- config_name: is
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 152739
num_examples: 805
download_size: 99913
dataset_size: 152739
- config_name: 'no'
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 141316
num_examples: 805
download_size: 94018
dataset_size: 141316
- config_name: ru
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 262317
num_examples: 805
download_size: 133403
dataset_size: 262317
- config_name: sv
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 146366
num_examples: 805
download_size: 96223
dataset_size: 146366
- config_name: zh
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 125499
num_examples: 805
download_size: 87092
dataset_size: 125499
configs:
- config_name: ca
data_files:
- split: test
path: ca/test-*
- config_name: da
data_files:
- split: test
path: da/test-*
- config_name: de
data_files:
- split: test
path: de/test-*
- config_name: el
data_files:
- split: test
path: el/test-*
- config_name: en
data_files:
- split: test
path: en/test-*
- config_name: es
data_files:
- split: test
path: es/test-*
- config_name: fr
data_files:
- split: test
path: fr/test-*
- config_name: hi
data_files:
- split: test
path: hi/test-*
- config_name: is
data_files:
- split: test
path: is/test-*
- config_name: 'no'
data_files:
- split: test
path: no/test-*
- config_name: ru
data_files:
- split: test
path: ru/test-*
- config_name: sv
data_files:
- split: test
path: sv/test-*
- config_name: zh
data_files:
- split: test
path: zh/test-*
--- | ZurichNLP/mlit-alpaca-eval | [
"arxiv:2307.16039",
"arxiv:2312.12683",
"region:us"
] | 2023-12-22T12:15:07+00:00 | {"dataset_info": [{"config_name": "ca", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 154255, "num_examples": 805}], "download_size": 99320, "dataset_size": 154255}, {"config_name": "da", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 144724, "num_examples": 805}], "download_size": 96555, "dataset_size": 144724}, {"config_name": "de", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 164871, "num_examples": 805}], "download_size": 109435, "dataset_size": 164871}, {"config_name": "el", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 287985, "num_examples": 805}], "download_size": 143043, "dataset_size": 287985}, {"config_name": "en", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 136100, "num_examples": 805}], "download_size": 88817, "dataset_size": 136100}, {"config_name": "es", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 157880, "num_examples": 805}], "download_size": 100029, "dataset_size": 157880}, {"config_name": "fr", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 168389, "num_examples": 805}], "download_size": 104885, "dataset_size": 168389}, {"config_name": "hi", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 353161, "num_examples": 805}], "download_size": 140012, "dataset_size": 353161}, {"config_name": "is", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 152739, "num_examples": 805}], "download_size": 99913, "dataset_size": 152739}, {"config_name": "no", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 141316, "num_examples": 805}], "download_size": 94018, "dataset_size": 141316}, {"config_name": "ru", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 262317, "num_examples": 805}], "download_size": 133403, "dataset_size": 262317}, {"config_name": "sv", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 146366, "num_examples": 805}], "download_size": 96223, "dataset_size": 146366}, {"config_name": "zh", "features": [{"name": "instruction", "dtype": "string"}], "splits": [{"name": "test", "num_bytes": 125499, "num_examples": 805}], "download_size": 87092, "dataset_size": 125499}], "configs": [{"config_name": "ca", "data_files": [{"split": "test", "path": "ca/test-*"}]}, {"config_name": "da", "data_files": [{"split": "test", "path": "da/test-*"}]}, {"config_name": "de", "data_files": [{"split": "test", "path": "de/test-*"}]}, {"config_name": "el", "data_files": [{"split": "test", "path": "el/test-*"}]}, {"config_name": "en", "data_files": [{"split": "test", "path": "en/test-*"}]}, {"config_name": "es", "data_files": [{"split": "test", "path": "es/test-*"}]}, {"config_name": "fr", "data_files": [{"split": "test", "path": "fr/test-*"}]}, {"config_name": "hi", "data_files": [{"split": "test", "path": "hi/test-*"}]}, {"config_name": "is", "data_files": [{"split": "test", "path": "is/test-*"}]}, {"config_name": "no", "data_files": [{"split": "test", "path": "no/test-*"}]}, {"config_name": "ru", "data_files": [{"split": "test", "path": "ru/test-*"}]}, 
{"config_name": "sv", "data_files": [{"split": "test", "path": "sv/test-*"}]}, {"config_name": "zh", "data_files": [{"split": "test", "path": "zh/test-*"}]}]} | 2023-12-22T13:41:21+00:00 | [
"2307.16039",
"2312.12683"
] | [] | TAGS
#arxiv-2307.16039 #arxiv-2312.12683 #region-us
| # Description
Translated versions of the AlpacaEval prompt dataset for evaluating the performance of chat LLMs.
Translations were generated with 'gpt-3.5-turbo-0613' using the following prompt template (adapted from Lai et al., 2023):
# Usage
---
dataset_info:
- config_name: ca
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 154255
num_examples: 805
download_size: 99320
dataset_size: 154255
- config_name: da
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 144724
num_examples: 805
download_size: 96555
dataset_size: 144724
- config_name: de
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 164871
num_examples: 805
download_size: 109435
dataset_size: 164871
- config_name: el
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 287985
num_examples: 805
download_size: 143043
dataset_size: 287985
- config_name: en
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 136100
num_examples: 805
download_size: 88817
dataset_size: 136100
- config_name: es
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 157880
num_examples: 805
download_size: 100029
dataset_size: 157880
- config_name: fr
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 168389
num_examples: 805
download_size: 104885
dataset_size: 168389
- config_name: hi
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 353161
num_examples: 805
download_size: 140012
dataset_size: 353161
- config_name: is
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 152739
num_examples: 805
download_size: 99913
dataset_size: 152739
- config_name: 'no'
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 141316
num_examples: 805
download_size: 94018
dataset_size: 141316
- config_name: ru
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 262317
num_examples: 805
download_size: 133403
dataset_size: 262317
- config_name: sv
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 146366
num_examples: 805
download_size: 96223
dataset_size: 146366
- config_name: zh
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 125499
num_examples: 805
download_size: 87092
dataset_size: 125499
configs:
- config_name: ca
data_files:
- split: test
path: ca/test-*
- config_name: da
data_files:
- split: test
path: da/test-*
- config_name: de
data_files:
- split: test
path: de/test-*
- config_name: el
data_files:
- split: test
path: el/test-*
- config_name: en
data_files:
- split: test
path: en/test-*
- config_name: es
data_files:
- split: test
path: es/test-*
- config_name: fr
data_files:
- split: test
path: fr/test-*
- config_name: hi
data_files:
- split: test
path: hi/test-*
- config_name: is
data_files:
- split: test
path: is/test-*
- config_name: 'no'
data_files:
- split: test
path: no/test-*
- config_name: ru
data_files:
- split: test
path: ru/test-*
- config_name: sv
data_files:
- split: test
path: sv/test-*
- config_name: zh
data_files:
- split: test
path: zh/test-*
license: cc
task_categories:
- conversational
- question-answering
language:
- en
- ca
- bg
- da
- de
- el
- es
- fr
- hi
- is
- 'no'
- ru
- sv
- zh
---
---
dataset_info:
- config_name: ca
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 154255
num_examples: 805
download_size: 99320
dataset_size: 154255
- config_name: da
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 144724
num_examples: 805
download_size: 96555
dataset_size: 144724
- config_name: de
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 164871
num_examples: 805
download_size: 109435
dataset_size: 164871
- config_name: el
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 287985
num_examples: 805
download_size: 143043
dataset_size: 287985
- config_name: en
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 136100
num_examples: 805
download_size: 88817
dataset_size: 136100
- config_name: es
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 157880
num_examples: 805
download_size: 100029
dataset_size: 157880
- config_name: fr
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 168389
num_examples: 805
download_size: 104885
dataset_size: 168389
- config_name: hi
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 353161
num_examples: 805
download_size: 140012
dataset_size: 353161
- config_name: is
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 152739
num_examples: 805
download_size: 99913
dataset_size: 152739
- config_name: 'no'
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 141316
num_examples: 805
download_size: 94018
dataset_size: 141316
- config_name: ru
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 262317
num_examples: 805
download_size: 133403
dataset_size: 262317
- config_name: sv
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 146366
num_examples: 805
download_size: 96223
dataset_size: 146366
- config_name: zh
features:
- name: instruction
dtype: string
splits:
- name: test
num_bytes: 125499
num_examples: 805
download_size: 87092
dataset_size: 125499
configs:
- config_name: ca
data_files:
- split: test
path: ca/test-*
- config_name: da
data_files:
- split: test
path: da/test-*
- config_name: de
data_files:
- split: test
path: de/test-*
- config_name: el
data_files:
- split: test
path: el/test-*
- config_name: en
data_files:
- split: test
path: en/test-*
- config_name: es
data_files:
- split: test
path: es/test-*
- config_name: fr
data_files:
- split: test
path: fr/test-*
- config_name: hi
data_files:
- split: test
path: hi/test-*
- config_name: is
data_files:
- split: test
path: is/test-*
- config_name: 'no'
data_files:
- split: test
path: no/test-*
- config_name: ru
data_files:
- split: test
path: ru/test-*
- config_name: sv
data_files:
- split: test
path: sv/test-*
- config_name: zh
data_files:
- split: test
path: zh/test-*
--- | [
"# Description\n\nTranslated versions of the AlpacaEval prompt dataset for evaluating the performance of chat LLMs.\n\nTranslations were generated using 'gpt-3.5-turbo-0613' using the following prompt template (adapted from Lai et al, 2023):",
"# Usage\n\n\n\n---\ndataset_info:\n- config_name: ca\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 154255\n num_examples: 805\n download_size: 99320\n dataset_size: 154255\n- config_name: da\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 144724\n num_examples: 805\n download_size: 96555\n dataset_size: 144724\n- config_name: de\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 164871\n num_examples: 805\n download_size: 109435\n dataset_size: 164871\n- config_name: el\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 287985\n num_examples: 805\n download_size: 143043\n dataset_size: 287985\n- config_name: en\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 136100\n num_examples: 805\n download_size: 88817\n dataset_size: 136100\n- config_name: es\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 157880\n num_examples: 805\n download_size: 100029\n dataset_size: 157880\n- config_name: fr\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 168389\n num_examples: 805\n download_size: 104885\n dataset_size: 168389\n- config_name: hi\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 353161\n num_examples: 805\n download_size: 140012\n dataset_size: 353161\n- config_name: is\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 152739\n num_examples: 805\n download_size: 99913\n dataset_size: 152739\n- config_name: 'no'\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 141316\n num_examples: 805\n download_size: 94018\n dataset_size: 141316\n- config_name: ru\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 262317\n num_examples: 805\n download_size: 133403\n dataset_size: 262317\n- config_name: sv\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 146366\n num_examples: 805\n download_size: 96223\n dataset_size: 146366\n- config_name: zh\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 125499\n num_examples: 805\n download_size: 87092\n dataset_size: 125499\nconfigs:\n- config_name: ca\n data_files:\n - split: test\n path: ca/test-*\n- config_name: da\n data_files:\n - split: test\n path: da/test-*\n- config_name: de\n data_files:\n - split: test\n path: de/test-*\n- config_name: el\n data_files:\n - split: test\n path: el/test-*\n- config_name: en\n data_files:\n - split: test\n path: en/test-*\n- config_name: es\n data_files:\n - split: test\n path: es/test-*\n- config_name: fr\n data_files:\n - split: test\n path: fr/test-*\n- config_name: hi\n data_files:\n - split: test\n path: hi/test-*\n- config_name: is\n data_files:\n - split: test\n path: is/test-*\n- config_name: 'no'\n data_files:\n - split: test\n path: no/test-*\n- config_name: ru\n data_files:\n - split: test\n path: ru/test-*\n- config_name: sv\n data_files:\n - split: test\n path: sv/test-*\n- config_name: zh\n data_files:\n - split: test\n path: zh/test-*\nlicense: cc\ntask_categories:\n- conversational\n- question-answering\nlanguage:\n- en\n- ca\n- bg\n- da\n- de\n- el\n- es\n- fr\n- hi\n- is\n- 'no'\n- ru\n- sv\n- zh\n---\n\n---\ndataset_info:\n- config_name: ca\n features:\n - name: instruction\n dtype: string\n splits:\n - name: 
test\n num_bytes: 154255\n num_examples: 805\n download_size: 99320\n dataset_size: 154255\n- config_name: da\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 144724\n num_examples: 805\n download_size: 96555\n dataset_size: 144724\n- config_name: de\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 164871\n num_examples: 805\n download_size: 109435\n dataset_size: 164871\n- config_name: el\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 287985\n num_examples: 805\n download_size: 143043\n dataset_size: 287985\n- config_name: en\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 136100\n num_examples: 805\n download_size: 88817\n dataset_size: 136100\n- config_name: es\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 157880\n num_examples: 805\n download_size: 100029\n dataset_size: 157880\n- config_name: fr\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 168389\n num_examples: 805\n download_size: 104885\n dataset_size: 168389\n- config_name: hi\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 353161\n num_examples: 805\n download_size: 140012\n dataset_size: 353161\n- config_name: is\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 152739\n num_examples: 805\n download_size: 99913\n dataset_size: 152739\n- config_name: 'no'\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 141316\n num_examples: 805\n download_size: 94018\n dataset_size: 141316\n- config_name: ru\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 262317\n num_examples: 805\n download_size: 133403\n dataset_size: 262317\n- config_name: sv\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 146366\n num_examples: 805\n download_size: 96223\n dataset_size: 146366\n- config_name: zh\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 125499\n num_examples: 805\n download_size: 87092\n dataset_size: 125499\nconfigs:\n- config_name: ca\n data_files:\n - split: test\n path: ca/test-*\n- config_name: da\n data_files:\n - split: test\n path: da/test-*\n- config_name: de\n data_files:\n - split: test\n path: de/test-*\n- config_name: el\n data_files:\n - split: test\n path: el/test-*\n- config_name: en\n data_files:\n - split: test\n path: en/test-*\n- config_name: es\n data_files:\n - split: test\n path: es/test-*\n- config_name: fr\n data_files:\n - split: test\n path: fr/test-*\n- config_name: hi\n data_files:\n - split: test\n path: hi/test-*\n- config_name: is\n data_files:\n - split: test\n path: is/test-*\n- config_name: 'no'\n data_files:\n - split: test\n path: no/test-*\n- config_name: ru\n data_files:\n - split: test\n path: ru/test-*\n- config_name: sv\n data_files:\n - split: test\n path: sv/test-*\n- config_name: zh\n data_files:\n - split: test\n path: zh/test-*\n---"
] | [
"TAGS\n#arxiv-2307.16039 #arxiv-2312.12683 #region-us \n",
"# Description\n\nTranslated versions of the AlpacaEval prompt dataset for evaluating the performance of chat LLMs.\n\nTranslations were generated using 'gpt-3.5-turbo-0613' using the following prompt template (adapted from Lai et al, 2023):",
"# Usage\n\n\n\n---\ndataset_info:\n- config_name: ca\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 154255\n num_examples: 805\n download_size: 99320\n dataset_size: 154255\n- config_name: da\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 144724\n num_examples: 805\n download_size: 96555\n dataset_size: 144724\n- config_name: de\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 164871\n num_examples: 805\n download_size: 109435\n dataset_size: 164871\n- config_name: el\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 287985\n num_examples: 805\n download_size: 143043\n dataset_size: 287985\n- config_name: en\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 136100\n num_examples: 805\n download_size: 88817\n dataset_size: 136100\n- config_name: es\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 157880\n num_examples: 805\n download_size: 100029\n dataset_size: 157880\n- config_name: fr\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 168389\n num_examples: 805\n download_size: 104885\n dataset_size: 168389\n- config_name: hi\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 353161\n num_examples: 805\n download_size: 140012\n dataset_size: 353161\n- config_name: is\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 152739\n num_examples: 805\n download_size: 99913\n dataset_size: 152739\n- config_name: 'no'\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 141316\n num_examples: 805\n download_size: 94018\n dataset_size: 141316\n- config_name: ru\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 262317\n num_examples: 805\n download_size: 133403\n dataset_size: 262317\n- config_name: sv\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 146366\n num_examples: 805\n download_size: 96223\n dataset_size: 146366\n- config_name: zh\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 125499\n num_examples: 805\n download_size: 87092\n dataset_size: 125499\nconfigs:\n- config_name: ca\n data_files:\n - split: test\n path: ca/test-*\n- config_name: da\n data_files:\n - split: test\n path: da/test-*\n- config_name: de\n data_files:\n - split: test\n path: de/test-*\n- config_name: el\n data_files:\n - split: test\n path: el/test-*\n- config_name: en\n data_files:\n - split: test\n path: en/test-*\n- config_name: es\n data_files:\n - split: test\n path: es/test-*\n- config_name: fr\n data_files:\n - split: test\n path: fr/test-*\n- config_name: hi\n data_files:\n - split: test\n path: hi/test-*\n- config_name: is\n data_files:\n - split: test\n path: is/test-*\n- config_name: 'no'\n data_files:\n - split: test\n path: no/test-*\n- config_name: ru\n data_files:\n - split: test\n path: ru/test-*\n- config_name: sv\n data_files:\n - split: test\n path: sv/test-*\n- config_name: zh\n data_files:\n - split: test\n path: zh/test-*\nlicense: cc\ntask_categories:\n- conversational\n- question-answering\nlanguage:\n- en\n- ca\n- bg\n- da\n- de\n- el\n- es\n- fr\n- hi\n- is\n- 'no'\n- ru\n- sv\n- zh\n---\n\n---\ndataset_info:\n- config_name: ca\n features:\n - name: instruction\n dtype: string\n splits:\n - name: 
test\n num_bytes: 154255\n num_examples: 805\n download_size: 99320\n dataset_size: 154255\n- config_name: da\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 144724\n num_examples: 805\n download_size: 96555\n dataset_size: 144724\n- config_name: de\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 164871\n num_examples: 805\n download_size: 109435\n dataset_size: 164871\n- config_name: el\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 287985\n num_examples: 805\n download_size: 143043\n dataset_size: 287985\n- config_name: en\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 136100\n num_examples: 805\n download_size: 88817\n dataset_size: 136100\n- config_name: es\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 157880\n num_examples: 805\n download_size: 100029\n dataset_size: 157880\n- config_name: fr\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 168389\n num_examples: 805\n download_size: 104885\n dataset_size: 168389\n- config_name: hi\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 353161\n num_examples: 805\n download_size: 140012\n dataset_size: 353161\n- config_name: is\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 152739\n num_examples: 805\n download_size: 99913\n dataset_size: 152739\n- config_name: 'no'\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 141316\n num_examples: 805\n download_size: 94018\n dataset_size: 141316\n- config_name: ru\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 262317\n num_examples: 805\n download_size: 133403\n dataset_size: 262317\n- config_name: sv\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 146366\n num_examples: 805\n download_size: 96223\n dataset_size: 146366\n- config_name: zh\n features:\n - name: instruction\n dtype: string\n splits:\n - name: test\n num_bytes: 125499\n num_examples: 805\n download_size: 87092\n dataset_size: 125499\nconfigs:\n- config_name: ca\n data_files:\n - split: test\n path: ca/test-*\n- config_name: da\n data_files:\n - split: test\n path: da/test-*\n- config_name: de\n data_files:\n - split: test\n path: de/test-*\n- config_name: el\n data_files:\n - split: test\n path: el/test-*\n- config_name: en\n data_files:\n - split: test\n path: en/test-*\n- config_name: es\n data_files:\n - split: test\n path: es/test-*\n- config_name: fr\n data_files:\n - split: test\n path: fr/test-*\n- config_name: hi\n data_files:\n - split: test\n path: hi/test-*\n- config_name: is\n data_files:\n - split: test\n path: is/test-*\n- config_name: 'no'\n data_files:\n - split: test\n path: no/test-*\n- config_name: ru\n data_files:\n - split: test\n path: ru/test-*\n- config_name: sv\n data_files:\n - split: test\n path: sv/test-*\n- config_name: zh\n data_files:\n - split: test\n path: zh/test-*\n---"
] | [
23,
59,
2104
] | [
"passage: TAGS\n#arxiv-2307.16039 #arxiv-2312.12683 #region-us \n# Description\n\nTranslated versions of the AlpacaEval prompt dataset for evaluating the performance of chat LLMs.\n\nTranslations were generated using 'gpt-3.5-turbo-0613' using the following prompt template (adapted from Lai et al, 2023):"
] |
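The record above describes per-language configs (ca, da, de, el, en, es, fr, hi, is, no, ru, sv, zh), each exposing a single `test` split with 805 rows and one string column, `instruction`, but the card's `# Usage` section is empty. As a minimal, hedged sketch only — the dataset's repository id is not stated in this record, so the path below is a placeholder — loading one language config with the `datasets` library might look like this:

```python
# Minimal sketch, assuming the card's YAML metadata above is accurate:
# each language config ("ca", "da", "de", ..., "zh") has a single "test"
# split with an "instruction" column holding one evaluation prompt per row.
# "your-org/alpaca-eval-translated" is a HYPOTHETICAL repository id --
# substitute the actual dataset path.
from datasets import load_dataset

prompts_de = load_dataset("your-org/alpaca-eval-translated", "de", split="test")

# Print a few of the translated evaluation prompts.
for example in prompts_de.select(range(3)):
    print(example["instruction"])
```

The same call with a different config name would fetch any of the other listed languages.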