---
license: mit
library_name: transformers
pipeline_tag: text-generation
---
This repository contains the R1-Code-Interpreter models described in *R1-Code-Interpreter: Training LLMs to Reason with Code via Supervised and Reinforcement Learning*.
The models are Qwen-2.5 models (3B/7B/14B) fine-tuned with supervised fine-tuning (SFT) and reinforcement learning (RL) to generate code as part of step-by-step reasoning.
For the training code and further details, please refer to the GitHub repository and the project page.
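
Since the card lists `library_name: transformers` and `pipeline_tag: text-generation`, a checkpoint can presumably be loaded with the standard `transformers` API. The sketch below is illustrative only; the repository ID and prompt are placeholders, not the official model IDs or prompt format, so substitute the actual checkpoint name and prompting convention from the GitHub repository.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

# Placeholder repository ID; replace with the actual R1-Code-Interpreter checkpoint (3B/7B/14B).
model_id = "your-org/R1-Code-Interpreter-7B"

tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

# Hypothetical prompt: ask the model to reason step by step, writing code where it helps.
prompt = "Solve the following problem step by step, writing Python code where it helps: ..."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)

outputs = model.generate(**inputs, max_new_tokens=512)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```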