# Linear Next Benchmark

Linear Next is a comprehensive benchmark designed to fairly compare efficient transformer architectures. It evaluates different approaches, including linear attention, sparse attention, and other model structures, under identical training conditions and datasets.
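
As a point of reference, the sketch below illustrates the core idea behind one of the benchmarked families, linear attention, contrasted with standard softmax attention. It is a minimal illustration only: the function and tensor names are ours, and the actual benchmarked models use more elaborate feature maps, gating, and normalization.

```python
import torch

def softmax_attention(q, k, v):
    # Standard attention: O(n^2) time and memory in sequence length n.
    # q, k, v have shape (batch, n, d).
    scores = q @ k.transpose(-2, -1) / q.shape[-1] ** 0.5
    return torch.softmax(scores, dim=-1) @ v

def linear_attention(q, k, v, eps=1e-6):
    # Linear attention replaces softmax(q k^T) with a kernel feature map phi,
    # so (phi(q) phi(k)^T) v can be regrouped as phi(q) (phi(k)^T v),
    # reducing the cost to O(n) in sequence length.
    phi = lambda x: torch.nn.functional.elu(x) + 1  # one common choice of feature map
    q, k = phi(q), phi(k)
    kv = k.transpose(-2, -1) @ v        # (batch, d, d) key/value summary
    z = q @ k.sum(dim=1).unsqueeze(-1)  # (batch, n, 1) normalizer
    return (q @ kv) / (z + eps)
```

With `q = k = v = torch.randn(2, 128, 64)`, both functions return a tensor of shape `(2, 128, 64)`; only their scaling with sequence length differs.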

## Overview

The benchmark aims to provide an unbiased comparison of efficient transformer variants by ensuring that all models are trained with the same datasets, hyperparameters, and evaluation metrics. This allows for a clear understanding of the relative strengths and weaknesses of each approach.

## Datasets

The benchmark utilizes a diverse collection of high-quality datasets:

### General Text

- **DCLM-pro**: A large-scale dataset containing diverse text from various domains, designed for general language modeling.
- **Cosmopedia-v2**: A curated corpus of high-quality web content covering a wide range of topics, with an emphasis on educational and informative material.
- **Fineweb-edu**: A filtered collection of educational web content, focusing on instructional and academic text from reliable sources.

### Code

- **The Stack v2**: A comprehensive collection of source code spanning multiple programming languages, designed to train models on code understanding and generation.

### Mathematics

- **Finemath**: A specialized dataset of mathematical content, including equations, proofs, and explanations across a range of difficulty levels.

### Reasoning

- **Natural Reasoning**: A dataset focused on logical reasoning, problem solving, and inference tasks, designed to improve models' reasoning capabilities.
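
As a rough illustration of how such a mixture can be assembled, the snippet below interleaves several corpora with fixed sampling probabilities using the Hugging Face `datasets` library. The dataset identifiers and mixing ratios are placeholders, not the ones used by the benchmark.

```python
from datasets import load_dataset, interleave_datasets

# Hypothetical dataset identifiers and mixing ratios, for illustration only.
mixture = {
    "org/general-text-corpus": 0.5,
    "org/code-corpus": 0.3,
    "org/math-corpus": 0.2,
}

# Stream each corpus so the mixture can be built without downloading everything.
streams = [load_dataset(repo, split="train", streaming=True) for repo in mixture]

# Interleave the streams with fixed sampling probabilities so that every
# architecture in the benchmark sees exactly the same data blend.
mixed = interleave_datasets(streams, probabilities=list(mixture.values()), seed=42)

for example in mixed.take(3):
    print(example)
```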

## Methodology

All models in the Linear Next benchmark are trained and evaluated with identical:

- Training datasets and data mixing ratios
- Optimization parameters
- Hardware configurations
- Evaluation metrics

This controlled environment ensures that performance differences can be attributed to architectural differences rather than training conditions.
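
For concreteness, a shared training recipe in such a setup might look like the sketch below. All names and values are hypothetical and only illustrate the idea that every architecture is trained from one common configuration; the actual settings are documented in the repository.

```python
# Hypothetical shared recipe applied to every architecture (illustrative values).
COMMON_CONFIG = {
    "sequence_length": 4096,
    "global_batch_size": 1024,
    "optimizer": "AdamW",
    "learning_rate": 3e-4,
    "lr_schedule": "cosine",
    "warmup_steps": 2000,
    "weight_decay": 0.1,
    "precision": "bf16",
    "seed": 42,
}

def build_run_config(architecture: str) -> dict:
    """Combine the shared recipe with the only thing that varies: the architecture."""
    return {**COMMON_CONFIG, "architecture": architecture}

runs = [build_run_config(a) for a in ["softmax", "linear_attention", "sparse_attention"]]
```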

## Results

Detailed benchmark results, including training curves, inference speed, memory usage, and performance metrics across different tasks, are available in the project repository.