GRPO Trainer
Overview
TRL supports the GRPO Trainer for training language models, as described in the paper DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models by Zhihong Shao, Peiyi Wang, Qihao Zhu, Runxin Xu, Junxiao Song, Mingchuan Zhang, Y. K. Li, Y. Wu, Daya Guo.
The abstract from the paper is the following:
Mathematical reasoning poses a significant challenge for language models due to its complex and structured nature. In this paper, we introduce DeepSeekMath 7B, which continues pre-training DeepSeek-Coder-Base-v1.5 7B with 120B math-related tokens sourced from Common Crawl, together with natural language and code data. DeepSeekMath 7B has achieved an impressive score of 51.7% on the competition-level MATH benchmark without relying on external toolkits and voting techniques, approaching the performance level of Gemini-Ultra and GPT-4. Self-consistency over 64 samples from DeepSeekMath 7B achieves 60.9% on MATH. The mathematical reasoning capability of DeepSeekMath is attributed to two key factors: First, we harness the significant potential of publicly available web data through a meticulously engineered data selection pipeline. Second, we introduce Group Relative Policy Optimization (GRPO), a variant of Proximal Policy Optimization (PPO), that enhances mathematical reasoning abilities while concurrently optimizing the memory usage of PPO.
This post-training method was contributed by Quentin Gallouédec.
Quick start
This example demonstrates how to train a model using the GRPO method. We train a Qwen 0.5B Instruct model on the prompts from the TLDR dataset (the completion column is ignored!). You can browse the dataset at https://huggingface.co/datasets/trl-lib/tldr.
Below is the script to train the model.
# train_grpo.py
from datasets import load_dataset
from trl import GRPOConfig, GRPOTrainer
dataset = load_dataset("trl-lib/tldr", split="train")
# Define the reward function, which rewards completions that are close to 20 characters
def reward_len(completions, **kwargs):
    return [-abs(20 - len(completion)) for completion in completions]
training_args = GRPOConfig(output_dir="Qwen2-0.5B-GRPO", logging_steps=10)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=reward_len,
    args=training_args,
    train_dataset=dataset,
)
trainer.train()
Execute the script using the following command:
accelerate launch train_grpo.py
Distributed across 8 GPUs, the training takes approximately 1 day.
Looking deeper into the GRPO method
GRPO is an online learning algorithm, meaning it improves iteratively by using data generated by the trained model itself during training. The intuition behind the GRPO objective is to maximize the advantage of the generated completions while ensuring that the model remains close to the reference policy. To understand how GRPO works, it can be broken down into four main steps: generating completions, computing the advantage, estimating the KL divergence, and computing the loss.
Generating completions
At each training step, we sample a batch of prompts and generate a group of $G$ completions for each prompt (denoted $o_1, o_2, \ldots, o_G$).
Computing the advantage
For each of the $G$ sequences, we compute the reward using a reward model. To align with the comparative nature of reward models, which are typically trained on datasets of comparisons between outputs for the same question, the advantage is calculated to reflect these relative comparisons. It is normalized as follows:

$$\hat{A}_{i,t} = \frac{r_i - \text{mean}(\mathbf{r})}{\text{std}(\mathbf{r})}, \qquad \mathbf{r} = (r_1, r_2, \ldots, r_G)$$
This approach gives the method its name: Group Relative Policy Optimization (GRPO).
It was shown in the paper Understanding R1-Zero-Like Training: A Critical Perspective that scaling by $\text{std}(\mathbf{r})$ may cause a question-level difficulty bias. You can disable this scaling by setting scale_rewards=False in [GRPOConfig].
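To make this concrete, here is a minimal sketch of the group-relative advantage computation described above. It is illustrative only, not the trainer's internal implementation; the scale_rewards flag here simply mirrors the option of the same name in [GRPOConfig].
import numpy as np

def group_relative_advantages(rewards, scale_rewards=True):
    """Compute group-relative advantages for one prompt's group of completions."""
    rewards = np.asarray(rewards, dtype=np.float32)
    advantages = rewards - rewards.mean()
    if scale_rewards:
        # Dividing by the group std is the default; disable it with scale_rewards=False
        advantages = advantages / (rewards.std() + 1e-4)
    return advantages

# Example: 4 completions for the same prompt, rewarded 1 if correct and 0 otherwise
print(group_relative_advantages([1.0, 0.0, 0.0, 1.0]))  # approximately [ 1., -1., -1.,  1.]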
Estimating the KL divergence
KL divergence is estimated using the approximator introduced by Schulman (2020). The approximator is defined as follows:

$$\mathbb{D}_{\mathrm{KL}}\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right] = \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - \log \frac{\pi_{\mathrm{ref}}(o_{i,t} \mid q, o_{i,<t})}{\pi_\theta(o_{i,t} \mid q, o_{i,<t})} - 1$$
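As a quick illustration, the per-token estimate can be computed from log-probabilities under the two models. This is a hedged sketch, not the trainer's internal code:
import torch

def kl_approx(ref_logprobs, policy_logprobs):
    """Schulman's k3 estimator: r - log(r) - 1, with r = pi_ref / pi_theta (per token)."""
    log_ratio = ref_logprobs - policy_logprobs  # log(pi_ref / pi_theta)
    return log_ratio.exp() - log_ratio - 1.0    # always non-negative

# Example with dummy per-token log-probabilities
ref_logprobs = torch.tensor([-1.2, -0.7, -2.3])
policy_logprobs = torch.tensor([-1.0, -0.9, -2.0])
print(kl_approx(ref_logprobs, policy_logprobs))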
Computing the loss
The objective is to maximize the advantage while ensuring that the model remains close to the reference policy. Consequently, the loss is defined as follows:

$$\mathcal{L}_{\mathrm{GRPO}}(\theta) = -\frac{1}{\sum_{i=1}^G |o_i|} \sum_{i=1}^G \sum_{t=1}^{|o_i|} \left[ \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\left[\pi_\theta(o_{i,t} \mid q, o_{i,<t})\right]_{\text{no grad}}} \hat{A}_{i,t} - \beta\, \mathbb{D}_{\mathrm{KL}}\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right] \right]$$
where the first term represents the scaled advantage and the second term penalizes deviations from the reference policy through KL divergence.
Note that compared to the original formulation in DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models, we don't scale by $\frac{1}{|o_i|}$ because it was shown in the paper Understanding R1-Zero-Like Training: A Critical Perspective that this introduces a response-level length bias. More details in Loss Types.
Note that compared to the original formulation in DeepSeekMath: Pushing the Limits of Mathematical Reasoning in Open Language Models, we use $\beta = 0.0$ by default, meaning that the KL divergence term is not used. This choice is motivated by several recent studies (e.g., Open-Reasoner-Zero: An Open Source Approach to Scaling Up Reinforcement Learning on the Base Model) which have shown that the KL divergence term is not essential for training with GRPO. As a result, it has become common practice to exclude it (e.g., Understanding R1-Zero-Like Training: A Critical Perspective, DAPO: An Open-Source LLM Reinforcement Learning System at Scale). If you wish to include the KL divergence term, you can set beta in [GRPOConfig] to a non-zero value.
In the original paper, this formulation is generalized to account for multiple updates after each generation (denoted $\mu$, which can be set with num_iterations in [GRPOConfig]) by leveraging the clipped surrogate objective:

$$\mathcal{L}_{\mathrm{GRPO}}(\theta) = -\frac{1}{\sum_{i=1}^G |o_i|} \sum_{i=1}^G \sum_{t=1}^{|o_i|} \left[ \min\!\left( \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\text{old}}}(o_{i,t} \mid q, o_{i,<t})} \hat{A}_{i,t},\ \text{clip}\!\left( \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\pi_{\theta_{\text{old}}}(o_{i,t} \mid q, o_{i,<t})}, 1-\epsilon, 1+\epsilon \right) \hat{A}_{i,t} \right) - \beta\, \mathbb{D}_{\mathrm{KL}}\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right] \right]$$
where $\text{clip}(\cdot, 1-\epsilon, 1+\epsilon)$ ensures that updates do not deviate excessively from the reference policy by bounding the policy ratio between $1-\epsilon$ and $1+\epsilon$. When $\mu = 1$ (default in TRL), the clipped surrogate objective simplifies to the original objective.
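The following is a minimal, hedged sketch of how the per-token clipped objective above can be computed from log-probabilities and advantages. It is illustrative only, not the trainer's internal implementation; the epsilon and beta arguments mirror the corresponding options in [GRPOConfig].
import torch

def per_token_grpo_loss(policy_logprobs, old_logprobs, ref_logprobs, advantages, epsilon=0.2, beta=0.0):
    """Clipped surrogate per-token loss with an optional KL penalty toward the reference model."""
    ratio = torch.exp(policy_logprobs - old_logprobs)              # pi_theta / pi_theta_old
    unclipped = ratio * advantages
    clipped = torch.clamp(ratio, 1 - epsilon, 1 + epsilon) * advantages
    loss = -torch.min(unclipped, clipped)                          # maximize objective -> minimize its negative
    if beta > 0.0:
        log_ratio_ref = ref_logprobs - policy_logprobs
        kl = log_ratio_ref.exp() - log_ratio_ref - 1.0             # k3 estimator from the previous section
        loss = loss + beta * kl
    return loss  # shape: (num_tokens,); how it is aggregated depends on the loss type (see Loss Types)

# Example with dummy values for a single completion of 3 tokens
adv = torch.tensor([0.8, 0.8, 0.8])   # the sequence-level advantage broadcast to each token
pol = torch.tensor([-1.0, -0.9, -2.0])
old = torch.tensor([-1.1, -0.8, -2.0])
ref = torch.tensor([-1.2, -0.7, -2.3])
print(per_token_grpo_loss(pol, old, ref, adv, beta=0.04))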
Loss Types
Several formulations of the objective have been proposed in the literature. Initially, the objective of GRPO was defined as follows:

$$\mathcal{L}_{\mathrm{GRPO}}(\theta) = -\frac{1}{G} \sum_{i=1}^G \frac{1}{|o_i|} \sum_{t=1}^{|o_i|} l_{i,t},$$

where

$$l_{i,t} = \frac{\pi_\theta(o_{i,t} \mid q, o_{i,<t})}{\left[\pi_\theta(o_{i,t} \mid q, o_{i,<t})\right]_{\text{no grad}}} \hat{A}_{i,t} - \beta\, \mathbb{D}_{\mathrm{KL}}\left[\pi_\theta \,\|\, \pi_{\mathrm{ref}}\right].$$
The DAPO paper highlights the limitations of the GRPO algorithm's sample-level loss in long-CoT scenarios, where longer responses are under-penalized, leading to poorer quality outputs. The proposed solution is a token-level normalization, which better handles longer sequences by assigning more balanced rewards to individual tokens, regardless of response length:

$$\mathcal{L}_{\mathrm{DAPO}}(\theta) = -\frac{1}{\sum_{i=1}^G |o_i|} \sum_{i=1}^G \sum_{t=1}^{|o_i|} l_{i,t}$$
Furthermore, it was demonstrated in the paper Understanding R1-Zero-Like Training: A Critical Perspective that the initial GRPO formulation introduces a response-level length bias. They show that while the DAPO formulation reduces this bias, it does not eliminate it completely. To fully remove this bias, they propose dividing by a constant instead of the sequence length, resulting in the following formulation:

$$\mathcal{L}_{\mathrm{Dr.\,GRPO}}(\theta) = -\frac{1}{L G} \sum_{i=1}^G \sum_{t=1}^{|o_i|} l_{i,t},$$

where the constant $L$ is recommended to be the maximum completion length. To use this formulation, set loss_type="dr_grpo" in the [GRPOConfig].
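The formulations above differ only in how the per-token losses $l_{i,t}$ are aggregated. Below is a hedged sketch of the three aggregation schemes, assuming a per_token_loss tensor of shape (G, T) and a matching boolean completion_mask. The string labels are only for this illustration; the text above confirms loss_type="dr_grpo", and the exact option names are listed in [GRPOConfig].
import torch

def aggregate_loss(per_token_loss, completion_mask, loss_type, max_completion_length=None):
    """Aggregate a (G, T) per-token loss according to the chosen aggregation scheme."""
    mask = completion_mask.float()
    if loss_type == "grpo":
        # Sample-level: average per sequence, then average over the group (original GRPO)
        per_seq = (per_token_loss * mask).sum(dim=1) / mask.sum(dim=1)
        return per_seq.mean()
    if loss_type == "dapo":
        # Token-level: average over all completion tokens in the group
        return (per_token_loss * mask).sum() / mask.sum()
    if loss_type == "dr_grpo":
        # Divide by a constant (recommended: the maximum completion length) instead of |o_i|
        G = per_token_loss.shape[0]
        return (per_token_loss * mask).sum() / (G * max_completion_length)
    raise ValueError(f"Unknown loss_type: {loss_type}")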
Logged metrics
- num_tokens: The total number of tokens processed so far, including both prompts and completions.
- completions/mean_length: The average length of generated completions.
- completions/min_length: The minimum length of generated completions.
- completions/max_length: The maximum length of generated completions.
- completions/mean_terminated_length: The average length of generated completions that terminate with EOS.
- completions/min_terminated_length: The minimum length of generated completions that terminate with EOS.
- completions/max_terminated_length: The maximum length of generated completions that terminate with EOS.
- completions/clipped_ratio: The ratio of truncated (clipped) completions.
- reward/{reward_func_name}/mean: The average reward from a specific reward function.
- reward/{reward_func_name}/std: The standard deviation of the reward from a specific reward function.
- reward: The overall average reward after applying reward weights.
- reward_std: The standard deviation of the overall reward within each batch after applying reward weights.
- frac_reward_zero_std: The fraction of samples in the generation batch with a reward std of zero, implying there is little diversity for that prompt (all answers are correct or all are incorrect).
- kl: The average KL divergence between the model and the reference model, calculated over generated completions. Logged only if beta is nonzero.
- clip_ratio/region_mean: The ratio of token probabilities where the GRPO objective is clipped to stay within the trust region. A higher value means more tokens are clipped, which constrains how much the policy $\pi_\theta$ can change.
- clip_ratio/low_mean: The average ratio of token probabilities that were clipped on the lower bound of the trust region, $1 - \epsilon_{\mathrm{low}}$.
- clip_ratio/low_min: The minimum ratio of token probabilities that were clipped on the lower bound of the trust region, $1 - \epsilon_{\mathrm{low}}$.
- clip_ratio/high_mean: The average ratio of token probabilities that were clipped on the upper bound of the trust region, $1 + \epsilon_{\mathrm{high}}$.
- clip_ratio/high_max: The maximum ratio of token probabilities that were clipped on the upper bound of the trust region, $1 + \epsilon_{\mathrm{high}}$.
Customization
Speed up training with vLLM-powered generation
Generation is often the main bottleneck when training with online methods. To accelerate generation, you can use vLLM, a high-throughput, low-latency inference engine for LLMs. To enable it, first install the package with
pip install trl[vllm]
We support two ways of using vLLM during training: server mode and colocate mode.
🔌 Option 1: Server mode
In this mode, vLLM runs in a separate process (and on separate GPUs) and communicates with the trainer via HTTP. This is ideal if you have dedicated GPUs for inference.
Start the vLLM server:
trl vllm-serve --model <model_name>
Enable server mode in your training script:
from trl import GRPOConfig

training_args = GRPOConfig(
    ...,
    use_vllm=True,
    vllm_mode="server",  # default value, can be omitted
)
Make sure that the server is using different GPUs than the trainer, otherwise you may run into NCCL errors. You can specify the GPUs to use with the CUDA_VISIBLE_DEVICES
environment variable.
🧩 Option 2: Colocate mode
In this mode, vLLM runs inside the trainer process and shares GPU memory with the training model. This avoids launching a separate server and can improve GPU utilization, but may lead to memory contention on the training GPUs.
from trl import GRPOConfig
training_args = GRPOConfig(
    ...,
    use_vllm=True,
    vllm_mode="colocate",
)
Depending on the model size and the overall GPU memory requirements for training, you may need to adjust the vllm_gpu_memory_utilization
parameter in [GRPOConfig
] to avoid underutilization or out-of-memory errors.
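For example, a possible starting point when colocating on memory-constrained GPUs (the value below is illustrative and should be tuned for your setup):
from trl import GRPOConfig

training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    use_vllm=True,
    vllm_mode="colocate",
    vllm_gpu_memory_utilization=0.3,  # fraction of each GPU's memory reserved for vLLM (illustrative value)
)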
For more information, see Speeding up training with vLLM.
GRPO at scale: train a 70B+ model on multiple nodes
When training large models like Qwen2.5-72B, you need several key optimizations to make the training efficient and scalable across multiple GPUs and nodes. These include:
- DeepSpeed ZeRO Stage 3: ZeRO leverages data parallelism to distribute model states (weights, gradients, optimizer states) across multiple GPUs and CPUs, reducing memory and compute requirements on each device. Since large models cannot fit on a single GPU, using ZeRO Stage 3 is required for training such models. For more details, see DeepSpeed Integration.
- Accelerate: Accelerate is a library that simplifies distributed training across multiple GPUs and nodes. It provides a simple API to launch distributed training and handles the complexities of distributed training, such as data parallelism, gradient accumulation, and distributed data loading. For more details, see Distributing Training.
- vLLM: See the previous section on how to use vLLM to speed up generation.
Below is an example SLURM script to train a 70B model with GRPO on multiple nodes. This script trains a model on 4 nodes and uses the 5th node for vLLM-powered generation.
#!/bin/bash
#SBATCH --nodes=5
#SBATCH --gres=gpu:8
# Get the list of allocated nodes
NODELIST=($(scontrol show hostnames $SLURM_JOB_NODELIST))
# Assign the first 4 nodes for training and the 5th node for vLLM
TRAIN_NODES="${NODELIST[@]:0:4}" # Nodes 0, 1, 2, 3 for training
VLLM_NODE="${NODELIST[4]}" # Node 4 for vLLM
# Run training on the first 4 nodes (Group 1)
srun --nodes=4 --ntasks=4 --nodelist="${NODELIST[@]:0:4}" accelerate launch \
    --config_file examples/accelerate_configs/deepspeed_zero3.yaml \
    --num_processes 32 \
    --num_machines 4 \
    --main_process_ip ${NODELIST[0]} \
    --machine_rank $SLURM_PROCID \
    --rdzv_backend c10d \
    train_grpo.py \
    --vllm_server_host $VLLM_NODE &
# Run vLLM server on the 5th node (Group 2)
srun --nodes=1 --ntasks=1 --nodelist="${NODELIST[4]}" trl vllm-serve --model Qwen/Qwen2.5-72B --tensor_parallel_size 8 &
wait
The corresponding training script, train_grpo.py, is shown below:
# train_grpo.py
import argparse

from datasets import load_dataset
from trl import GRPOTrainer, GRPOConfig


def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--vllm_server_host", type=str, default="", help="The server IP")
    args = parser.parse_args()

    # Example dataset from TLDR
    dataset = load_dataset("trl-lib/tldr", split="train")

    # Dummy reward function: count the number of unique characters in the completions
    def reward_num_unique_chars(completions, **kwargs):
        return [len(set(c)) for c in completions]

    training_args = GRPOConfig(
        output_dir="Qwen2.5-72B-GRPO",
        per_device_train_batch_size=4,
        bf16=True,
        gradient_checkpointing=True,
        logging_steps=10,
        use_vllm=True,
        vllm_server_host=args.vllm_server_host.replace("ip-", "").replace("-", "."),  # from ip-X-X-X-X to X.X.X.X
    )

    trainer = GRPOTrainer(
        model="Qwen/Qwen2.5-72B",
        args=training_args,
        reward_funcs=reward_num_unique_chars,
        train_dataset=dataset,
    )
    trainer.train()


if __name__ == "__main__":
    main()
Using a custom reward function
The [GRPOTrainer
] supports using custom reward functions instead of dense reward models. To ensure compatibility, your reward function must satisfy the following requirements:
Input arguments:
The function must accept the following as keyword arguments:
- prompts (contains the prompts),
- completions (contains the generated completions),
- completions_ids (contains the tokenized completions),
- All column names (except prompt) that the dataset may have. For example, if the dataset contains a column named ground_truth, the function will be called with ground_truth as a keyword argument.
The easiest way to comply with this requirement is to use **kwargs in the function signature.
Depending on the dataset format, the input will vary:
- For standard format, prompts and completions will be lists of strings.
- For conversational format, prompts and completions will be lists of message dictionaries.
Return value: The function must return a list of floats. Each float represents the reward corresponding to a single completion.
Example 1: Reward longer completions
Below is an example of a reward function for a standard format that rewards longer completions:
def reward_func(completions_ids, **kwargs):
    """Reward function that assigns higher scores to longer completions (in terms of token count)."""
    return [float(len(ids)) for ids in completions_ids]
You can test it as follows:
>>> prompts = ["The sky is", "The sun is"] # not used in the reward function, but the trainer will pass it
>>> completions = [" blue.", " in the sky."] # not used in the reward function, but the trainer will pass it
>>> completions_ids = [[6303, 13], [304, 279, 12884, 13]]
>>> reward_func(prompts=prompts, completions=completions, completions_ids=completions_ids)
[2.0, 4.0]
Example 1.1: Reward longer completions (based on the number of characters)
Same as the previous example, but this time the reward function is based on the number of characters instead of tokens.
def reward_func(completions, **kwargs):
    """Reward function that assigns higher scores to longer completions (in terms of character count)."""
    return [float(len(completion)) for completion in completions]
You can test it as follows:
>>> prompts = ["The sky is", "The sun is"]
>>> completions = [" blue.", " in the sky."]
>>> completions_ids = [[6303, 13], [304, 279, 12884, 13]] # not used in the reward function, but the trainer will pass it
>>> reward_func(prompts=prompts, completions=completions, completions_ids=completions_ids)
[6.0, 12.0]
Example 2: Reward completions with specific format
Below is an example of a reward function that checks if the completion has a specific format. This example is inspired by the format reward function used in the paper DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning. It is designed for conversational format, where prompts and completions consist of structured messages.
import re
def format_reward_func(completions, **kwargs):
    """Reward function that checks if the completion has a specific format."""
    pattern = r"^<think>.*?</think><answer>.*?</answer>$"
    completion_contents = [completion[0]["content"] for completion in completions]
    matches = [re.match(pattern, content) for content in completion_contents]
    return [1.0 if match else 0.0 for match in matches]
You can test this function as follows:
>>> prompts = [
...     [{"role": "user", "content": "What is the result of (1 + 2) * 4?"}],
...     [{"role": "user", "content": "What is the result of (3 + 1) * 2?"}],
... ]
>>> completions = [
... [{"role": "assistant", "content": "<think>The sum of 1 and 2 is 3, which we multiply by 4 to get 12.</think><answer>(1 + 2) * 4 = 12</answer>"}],
... [{"role": "assistant", "content": "The sum of 3 and 1 is 4, which we multiply by 2 to get 8. So (3 + 1) * 2 = 8."}],
... ]
>>> format_reward_func(prompts=prompts, completions=completions)
[1.0, 0.0]
Example 3: Reward completions based on a reference
Below is an example of a reward function that checks if the completion is correct. This example is inspired by the accuracy reward function used in the paper DeepSeek-R1: Incentivizing Reasoning Capability in LLMs via Reinforcement Learning.
This example is designed for standard format, where the dataset contains a column named ground_truth
.
import re
def reward_func(completions, ground_truth, **kwargs):
    # Regular expression to capture content inside \boxed{}
    matches = [re.search(r"\\boxed\{(.*?)\}", completion) for completion in completions]
    contents = [match.group(1) if match else "" for match in matches]
    # Reward 1 if the content is the same as the ground truth, 0 otherwise
    return [1.0 if c == gt else 0.0 for c, gt in zip(contents, ground_truth)]
You can test this function as follows:
>>> prompts = ["Problem: Solve the equation $2x + 3 = 7$. Solution:", "Problem: Solve the equation $3x - 5 = 10$."]
>>> completions = [r" The solution is \boxed{2}.", r" The solution is \boxed{6}."]
>>> ground_truth = ["2", "5"]
>>> reward_func(prompts=prompts, completions=completions, ground_truth=ground_truth)
[1.0, 0.0]
Example 4: Multi-task reward functions
Below is an example of using multiple reward functions in the [GRPOTrainer
]. In this example, we define two task-specific reward functions: math_reward_func
and coding_reward_func
. The math_reward_func
rewards math problems based on their correctness, while the coding_reward_func
rewards coding problems based on whether the solution works.
from datasets import Dataset
from trl import GRPOTrainer
# Define a dataset that contains both math and coding problems
dataset = Dataset.from_list(
    [
        {"prompt": "What is 2+2?", "task": "math"},
        {"prompt": "Write a function that returns the sum of two numbers.", "task": "code"},
        {"prompt": "What is 3*4?", "task": "math"},
        {"prompt": "Write a function that returns the product of two numbers.", "task": "code"},
    ]
)
# Math-specific reward function
def math_reward_func(prompts, completions, task, **kwargs):
    rewards = []
    for prompt, completion, t in zip(prompts, completions, task):
        if t == "math":
            # Calculate math-specific reward
            correct = check_math_solution(prompt, completion)
            reward = 1.0 if correct else -1.0
            rewards.append(reward)
        else:
            # Return None for non-math tasks
            rewards.append(None)
    return rewards
# Coding-specific reward function
def coding_reward_func(prompts, completions, task, **kwargs):
    rewards = []
    for prompt, completion, t in zip(prompts, completions, task):
        if t == "code":  # matches the "task" values used in the dataset above
            # Calculate coding-specific reward
            works = test_code_solution(prompt, completion)
            reward = 1.0 if works else -1.0
            rewards.append(reward)
        else:
            # Return None for non-coding tasks
            rewards.append(None)
    return rewards
# Use both task-specific reward functions
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=[math_reward_func, coding_reward_func],
    train_dataset=dataset,
)
trainer.train()
In this example, the math_reward_func
and coding_reward_func
are designed to work with a mixed dataset that contains both math and coding problems. The task
column in the dataset is used to determine which reward function to apply to each problem. If there is no relevant reward function for a sample in the dataset, the reward function will return None
and the [GRPOTrainer
] will continue with the valid functions and tasks. This allows the [GRPOTrainer
] to handle multiple reward functions with different applicability.
Note that the [GRPOTrainer
] will ignore the None
rewards returned by the reward functions and only consider the rewards returned by the relevant functions. This ensures that the model is trained on the relevant tasks and ignores the tasks for which there is no relevant reward function.
Passing the reward function to the trainer
To use your custom reward function, pass it to the [GRPOTrainer
] as follows:
from trl import GRPOTrainer
trainer = GRPOTrainer(
    reward_funcs=reward_func,
    ...,
)
If you have multiple reward functions, you can pass them as a list:
from trl import GRPOTrainer
trainer = GRPOTrainer(
    reward_funcs=[reward_func1, reward_func2],
    ...,
)
and the reward will be computed as the sum of the rewards from each function, or the weighted sum if reward_weights
is provided in the config.
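For example, a hedged sketch of weighting the two functions above (the weight values are illustrative; reward_weights must contain one entry per reward function):
from trl import GRPOConfig, GRPOTrainer

# reward_func1, reward_func2, and dataset as defined earlier
training_args = GRPOConfig(
    output_dir="Qwen2-0.5B-GRPO",
    reward_weights=[0.7, 0.3],  # applied to reward_func1 and reward_func2 respectively (illustrative values)
)
trainer = GRPOTrainer(
    model="Qwen/Qwen2-0.5B-Instruct",
    reward_funcs=[reward_func1, reward_func2],
    args=training_args,
    train_dataset=dataset,
)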
Note that [GRPOTrainer
] supports multiple reward functions of different types. See the parameters documentation for more details.
GRPOTrainer
[[autodoc]] GRPOTrainer
GRPOConfig
[[autodoc]] GRPOConfig