arxiv:2605.11458

Adaptive Teacher Exposure for Self-Distillation in LLM Reasoning

Published on May 12 · Submitted by hanhan3344 on May 15

Abstract

Adaptive Teacher Exposure for Self-Distillation (ATESD) improves large language model reasoning by dynamically adjusting teacher exposure during training through a learnable policy controller.

AI-generated summary

On-policy self-distillation has become a strong recipe for LLM reasoning, where a privileged teacher supervises the student's own rollouts while conditioning on the reference solution. A design choice shared by nearly all such methods, however, has gone unquestioned: the teacher always sees the full reference reasoning. We argue that this default itself is part of the problem and identify a teacher-side exposure mismatch: when the teacher conditions on reasoning far beyond the student's current competence, the resulting token targets become too strong to absorb. A controlled fixed-exposure sweep makes this concrete on two fronts: 1) full exposure is not reliably the best choice, and 2) student-teacher mismatch grows monotonically as the teacher sees more privileged reasoning. This motivates treating teacher exposure not as a fixed hyperparameter but as a learnable training-time control variable. We therefore propose Adaptive Teacher Exposure for Self-Distillation (ATESD). ATESD models the reveal ratio with a lightweight Beta-policy controller conditioned on compact training-state statistics, and uses one sampled exposure for a short hold window of student updates. To make this exposure controller learnable, we optimize it with a discounted learning-progress reward that scores each held decision by its effect on the student's future improvement rather than its immediate loss change, addressing the delayed credit assignment induced by on-policy distillation. Experiments on AIME 24, AIME 25, and HMMT 25 across Qwen3-{1.7B, 4B, 8B} show that ATESD consistently outperforms competitive self-distillation and RL baselines, improving over OPSD by +0.95, +2.05, and +2.33 Average@12 points respectively, and establishing adaptive teacher exposure as an effective new axis for reasoning self-distillation.
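The control loop the abstract describes (sample a reveal ratio from a learnable Beta policy conditioned on training-state statistics, hold it for a window of student updates, then score the decision with a discounted learning-progress reward) can be sketched roughly as follows. This is a hedged illustration, not the paper's implementation: the linear parameterization of the Beta policy, the feature vector, the learning rate, and the score-function update rule are all assumptions introduced here for concreteness.

```python
import math
import random

def _digamma(x, h=1e-5):
    # Numerical digamma via central difference of lgamma (sketch-grade accuracy).
    return (math.lgamma(x + h) - math.lgamma(x - h)) / (2 * h)

def discounted_progress(improvements, gamma=0.9):
    """Discounted learning-progress reward: credit a held exposure decision
    by the student's future improvement rather than its immediate loss change."""
    return sum((gamma ** t) * imp for t, imp in enumerate(improvements))

class BetaExposureController:
    """Lightweight Beta policy over the teacher's reveal ratio.

    (alpha, beta) come from a linear map (an assumption of this sketch) over
    compact training-state features, and are updated with a score-function
    (REINFORCE-style) gradient weighted by the learning-progress reward.
    """

    def __init__(self, n_features, lr=0.05):
        self.wa = [0.0] * n_features  # weights producing log(alpha)
        self.wb = [0.0] * n_features  # weights producing log(beta)
        self.lr = lr

    def params(self, feats):
        alpha = math.exp(sum(w * f for w, f in zip(self.wa, feats)))
        beta = math.exp(sum(w * f for w, f in zip(self.wb, feats)))
        return alpha, beta

    def sample(self, feats):
        # One reveal ratio in [0, 1], held for a short window of student updates.
        alpha, beta = self.params(feats)
        return random.betavariate(alpha, beta)

    def update(self, feats, ratio, reward):
        # Score-function gradient of log Beta(ratio; alpha, beta),
        # chained through alpha = exp(wa . f) and beta = exp(wb . f).
        alpha, beta = self.params(feats)
        dlp_da = math.log(ratio) - _digamma(alpha) + _digamma(alpha + beta)
        dlp_db = math.log(1.0 - ratio) - _digamma(beta) + _digamma(alpha + beta)
        for i, f in enumerate(feats):
            self.wa[i] += self.lr * reward * dlp_da * alpha * f
            self.wb[i] += self.lr * reward * dlp_db * beta * f
```

In a training loop one would sample an exposure once per hold window, run the window's student updates, collect per-step improvement estimates, and call `update` with `discounted_progress(...)` as the reward; this addresses the delayed credit assignment the abstract mentions, since the reward looks at future improvement rather than the instantaneous loss change.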

Community


Paper author and submitter:

This paper explores a simple but overlooked question in self-distillation for LLM reasoning: should the teacher always see the full reference reasoning? We identify a teacher-side exposure mismatch, where fully privileged teacher signals can be too strong for the student’s current competence. Instead of fixing the teacher exposure ratio, we propose ATESD, which adaptively controls how much reference reasoning is revealed to the teacher during training. Across AIME 24, AIME 25, and HMMT 25 with Qwen3 models, adaptive teacher exposure consistently improves over strong self-distillation and RL baselines. We hope this work highlights teacher exposure as a useful new training-time control axis for reasoning self-distillation.


