DeepScaleR-1.5B-Preview-Reproduce

Overview

This model is a reproduction of the agentica-project/deepscaler project. We reproduced the results from that repository on a single node with 8× 80GB GPUs, reaching an average score of 56.7 across the five math benchmarks reported below.
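For a quick sanity check of the weights, the snippet below is a minimal inference sketch. It is not part of the original card: it assumes the standard Hugging Face transformers chat API, and the sampling temperature of 0.6 follows the common recommendation for R1-distilled models.

# Minimal inference sketch (assumptions: standard transformers chat API;
# device_map="auto" requires the accelerate package).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "ldwang/DeepScaleR-1.5B-Preview-Reproduce"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

messages = [{"role": "user", "content": "What is 7 * 8 - 5?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# The model emits a long chain of thought before the final answer,
# so leave generous room for new tokens.
output = model.generate(input_ids, max_new_tokens=8192, do_sample=True, temperature=0.6)
print(tokenizer.decode(output[0][input_ids.shape[-1]:], skip_special_tokens=True))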

Training

Training follows the repo's three-stage schedule, progressively extending the context length from 8K to 16K to 24K tokens; each stage starts from the final actor checkpoint of the previous stage.

export VLLM_ATTENTION_BACKEND=XFORMERS

# Run 8K context length training, 580 steps
export MODEL_PATH="deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B"
nohup bash run_deepscaler_1.5b_8k.sh --model $MODEL_PATH > stage1.log 2>&1 &

# Run 16K context length training, 430 steps
export MODEL_PATH="./checkpoints/deepscaler/deepscaler-1.5b-8k/actor/global_step_580"
nohup bash run_deepscaler_1.5b_16k.sh --model $MODEL_PATH > stage2.log 2>&1 &

# Run 24K context length training, 430 steps
export MODEL_PATH="./checkpoints/deepscaler/deepscaler-1.5b-16k/actor/global_step_430"
nohup bash run_deepscaler_1.5b_24k.sh --model $MODEL_PATH > stage3.log 2>&1 &

Evaluation

| Model | AIME 2024 | MATH 500 | AMC 2023 | Minerva Math | OlympiadBench | Avg. |
|---|---|---|---|---|---|---|
| Qwen-2.5-7B-Instruct | 13.3 | 79.8 | 50.6 | 34.6 | 40.7 | 43.8 |
| rStar-Math-7B | 26.7 | 78.4 | 47.5 | - | 47.1 | - |
| Eurus-2-7B-PRIME | 26.7 | 79.2 | 57.8 | 38.6 | 42.1 | 48.9 |
| Qwen2.5-7B-SimpleRL | 26.7 | 82.4 | 62.5 | 39.7 | 43.3 | 50.9 |
| DeepSeek-R1-Distill-Qwen-1.5B | 28.8 | 82.8 | 62.9 | 26.5 | 43.3 | 48.9 |
| Still-1.5B | 32.5 | 84.4 | 66.7 | 29.0 | 45.4 | 51.6 |
| DeepScaleR-1.5B-Preview | 43.1 | 87.8 | 73.6 | 30.2 | 50.0 | 57.0 |
| DeepScaleR-1.5B-Preview-Reproduce | 40.4 | 87.9 | 72.0 | 31.5 | 50.2 | 56.4 |
| 🎉 DeepScaleR-1.5B-Preview-Reproduce | 42.3 | 88.0 | 73.2 | 30.3 | 49.7 | 56.7 |
| O1-Preview | 40.0 | 81.4 | - | - | - | - |
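The Avg. column is the unweighted mean of the five benchmark scores. A quick arithmetic check for the two Reproduce rows (illustrative snippet, not part of the original card):

# Avg. = unweighted mean of the five benchmark scores.
# The row labels below are mine; they refer to the two Reproduce rows above.
reproduce_runs = {
    "Reproduce (first row)": [40.4, 87.9, 72.0, 31.5, 50.2],
    "Reproduce (🎉 row)": [42.3, 88.0, 73.2, 30.3, 49.7],
}
for name, scores in reproduce_runs.items():
    print(f"{name}: avg = {sum(scores) / len(scores):.1f}")
# Prints 56.4 and 56.7, matching the Avg. column.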

Citation

@misc{deepscaler2025,
  title={DeepScaleR: Surpassing O1-Preview with a 1.5B Model by Scaling RL},
  author={Michael Luo and Sijun Tan and Justin Wong and Xiaoxiang Shi and William Y. Tang and Manan Roongta and Colin Cai and Jeffrey Luo and Tianjun Zhang and Li Erran Li and Raluca Ada Popa and Ion Stoica},
  year={2025},
  howpublished={\url{https://pretty-radio-b75.notion.site/DeepScaleR-Surpassing-O1-Preview-with-a-1-5B-Model-by-Scaling-RL-19681902c1468005bed8ca303013a4e2}},
  note={Notion Blog}
}