---
datasets:
- monology/pile-uncopyrighted
- MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5
language:
- en
library_name: transformers
license: apache-2.0
metrics:
- accuracy
pipeline_tag: text-generation
---
# MiniPLM-Qwen-1.2B
[paper](https://arxiv.org/abs/2410.17215) | [code](https://github.com/thu-coai/MiniPLM)
**MiniPLM-Qwen-1.2B** is a 1.2B model with the Qwen architecture, pre-trained from scratch on [the Pile](https://huggingface.co/datasets/monology/pile-uncopyrighted) using the MiniPLM knowledge distillation framework with the [official Qwen1.5-1.8B](https://huggingface.co/Qwen/Qwen1.5-1.8B) as the teacher model.
We also open-source the [pre-training corpus](https://huggingface.co/datasets/MiniLLM/pile-diff_samp-qwen_1.8B-qwen_104M-r0.5) refined by Difference Sampling in MiniPLM for reproducibility.
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/2BqT0NgkmIXYlktovw9kG.png" width="1000">
</p>
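A minimal usage sketch with the `transformers` library is shown below. The repo ID `MiniLLM/MiniPLM-Qwen-1.2B` is an assumption based on the naming of the baseline checkpoints listed later; adjust it if the actual repository differs.

```python
# Minimal sketch: load the checkpoint and sample a continuation.
# NOTE: the repo ID below is assumed, not confirmed by this card.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "MiniLLM/MiniPLM-Qwen-1.2B"  # assumed repo ID
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype=torch.bfloat16)

inputs = tokenizer("The Pile is a large, diverse", return_tensors="pt")
outputs = model.generate(**inputs, max_new_tokens=64, do_sample=True, top_p=0.9)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```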
## Evaluation
MiniPLM models achieve better performance given the same training computation and scale well across model sizes:
<p align='left'>
<img src="https://cdn-uploads.huggingface.co/production/uploads/624ac662102fcdff87be51b9/EOYzajQcwQFT5PobqL3j0.png" width="1000">
</p>
## Baseline Models
+ [Conventional Pre-Training](https://huggingface.co/MiniLLM/Pretrain-Qwen-1.2B)
+ [VanillaKD](https://huggingface.co/MiniLLM/VanillaKD-Pretrain-Qwen-1.2B)
## Citation
```bibtex
@article{miniplm,
  title={MiniPLM: Knowledge Distillation for Pre-Training Language Models},
  author={Yuxian Gu and Hao Zhou and Fandong Meng and Jie Zhou and Minlie Huang},
  journal={arXiv preprint arXiv:2410.17215},
  year={2024}
}
```