arxiv:2505.07608

MiMo: Unlocking the Reasoning Potential of Language Model -- From Pretraining to Posttraining

Published on May 12 · Submitted by Solaris99 on May 13
#2 Paper of the day

Abstract

We present MiMo-7B, a large language model born for reasoning tasks, with optimization across both pre-training and post-training stages. During pre-training, we enhance the data preprocessing pipeline and employ a three-stage data mixing strategy to strengthen the base model's reasoning potential. MiMo-7B-Base is pre-trained on 25 trillion tokens, with an additional Multi-Token Prediction objective for enhanced performance and accelerated inference. During post-training, we curate a dataset of 130K verifiable mathematics and programming problems for reinforcement learning, integrating a test-difficulty-driven code-reward scheme to alleviate sparse-reward issues and employing strategic data resampling to stabilize training. Extensive evaluations show that MiMo-7B-Base possesses exceptional reasoning potential, outperforming even much larger 32B models. The final RL-tuned model, MiMo-7B-RL, achieves superior performance on mathematics, code, and general reasoning tasks, surpassing the performance of OpenAI o1-mini. The model checkpoints are available at https://github.com/xiaomimimo/MiMo.
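
The test-difficulty-driven code reward mentioned in the abstract is detailed in the paper itself; the sketch below only illustrates the general idea of turning a sparse pass/fail code reward into dense partial credit by weighting test cases by difficulty. The `TestCase` structure, the difficulty proxy (one minus an observed pass rate), and the weighting rule are hypothetical choices for illustration, not the paper's exact formulation.

```python
from dataclasses import dataclass
from typing import List


@dataclass
class TestCase:
    passed: bool      # did the candidate solution pass this test case?
    pass_rate: float  # assumed difficulty proxy: fraction of reference solutions that pass it


def difficulty_weighted_reward(tests: List[TestCase]) -> float:
    """Hypothetical dense code reward: harder tests (lower pass_rate) carry more weight.

    Only a sketch of the "test-difficulty-driven" idea from the abstract; the paper's
    actual scheme may estimate difficulty and combine per-test scores differently.
    """
    if not tests:
        return 0.0
    # Weight each test by its difficulty (1 - pass_rate), with a small floor so
    # easy tests still contribute something.
    weights = [max(1.0 - t.pass_rate, 0.05) for t in tests]
    earned = sum(w for w, t in zip(weights, tests) if t.passed)
    return earned / sum(weights)  # in [0, 1]; equals 1.0 only when every test passes


# Example: passing only the hard test earns far more credit than passing only the easy one.
easy = TestCase(passed=False, pass_rate=0.9)
hard = TestCase(passed=True, pass_rate=0.1)
print(difficulty_weighted_reward([easy, hard]))  # 0.9 under these assumed weights
```

Under this kind of scheme a rollout that solves the hard parts of a problem still receives a meaningful learning signal even if it misses an edge case, which is the sparse-reward issue the abstract says the scheme is meant to alleviate.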

Community

Paper author · Paper submitter

We present MiMo, a large language model born for reasoning tasks, with optimization across both pre-training and post-training stages. Pre-trained on 25 trillion tokens, MiMo-7B-Base possesses exceptional reasoning potential. The final RL-tuned model, MiMo-7B-RL, achieves superior performance on mathematics, code and general reasoning tasks.

GitHub: https://github.com/XiaomiMiMo/MiMo


Thanks for your excellent work :) I believe I may have been mistakenly included in the list of authors, possibly due to a name similarity. To avoid any misunderstanding, would it be possible to kindly update the author list with the correct individual?

An audio overview for learning on the go: https://youtu.be/y6mSdLgJYQY


Models citing this paper: 4

Datasets citing this paper: 0


Spaces citing this paper: 2

Collections including this paper: 1