# EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework

This project is a clean fork of the original veRL project to support vision language models. We thank all the authors for providing such a high-performance RL training framework.

EasyR1 is efficient and scalable thanks to the HybridEngine design and the latest release of vLLM's SPMD mode.
## Features

### Supported models
- Llama3/Qwen2/Qwen2.5 language models
- Qwen2/Qwen2.5-VL vision language models
- DeepSeek-R1 distill models
### Supported algorithms
- GRPO
- Reinforce++
- ReMax
- RLOO
### Supported datasets

- Any text or vision-text dataset in the required format (see the Custom Dataset section below)

### Supported tricks
- Padding-free training
- Resuming from checkpoint
- Wandb & SwanLab & MLflow & TensorBoard tracking
## Requirements

### Software Requirements
- Python 3.9+
- transformers>=4.49.0
- flash-attn>=2.4.3
- vllm>=0.7.3
We provide a Dockerfile to easily build the environment. We recommend using the pre-built Docker image for EasyR1.
```bash
# stable
docker pull hiyouga/verl:ngc-th2.5.1-cu120-vllm0.7.4-hotfix
# nightly
docker pull hiyouga/verl:ngc-th2.6.0-cu120-vllm0.8.2
```
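A typical way to launch the container is sketched below; the mount path and image tag are placeholders to adapt to your setup, not a fixed recipe from this project.

```bash
# Sketch: run the pre-built image with GPU access and mount the repository.
# The image tag and mount path are examples; adjust them to your environment.
docker run -it --gpus all --ipc=host \
    -v $(pwd):/workspace/EasyR1 \
    hiyouga/verl:ngc-th2.5.1-cu120-vllm0.7.4-hotfix bash
```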
### Hardware Requirements

\* estimated

| Method | Bits | 1.5B | 3B | 7B | 32B |
|---|---|---|---|---|---|
| GRPO Full Fine-Tuning | AMP | 2*24GB | 4*40GB | 8*40GB | 16*80GB |
| GRPO Full Fine-Tuning | BF16 | 1*24GB | 1*40GB | 4*40GB | 8*80GB |
Use `worker.actor.fsdp.torch_dtype=bf16` and `worker.actor.optim.strategy=adamw_bf16` to enable bf16 training.

We are working hard to reduce the VRAM usage in RL training. LoRA support will be integrated in upcoming updates.
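For example, a launch command with these overrides might look like the sketch below; the entry point and config path are assumed from the bundled example scripts in `examples/`, so check your own script before copying it verbatim.

```bash
# Sketch: append the bf16 overrides to the usual training invocation.
# The entry point and config path below are assumptions based on the example
# scripts shipped with the repository; adjust them to your setup.
python3 -m verl.trainer.main \
    config=examples/config.yaml \
    worker.actor.fsdp.torch_dtype=bf16 \
    worker.actor.optim.strategy=adamw_bf16
```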
## Tutorial: Run Qwen2.5-VL GRPO on Geometry3K Dataset in Just 3 Steps

### Installation

```bash
git clone https://github.com/hiyouga/EasyR1.git
cd EasyR1
pip install -e .
```
### GRPO Training

```bash
bash examples/qwen2_5_vl_7b_geo3k_grpo.sh
```

### Merge Checkpoint in Hugging Face Format

```bash
python3 scripts/model_merger.py --local_dir checkpoints/easy_r1/exp_name/global_step_1/actor
```
If you encounter issues connecting to Hugging Face, consider using `export HF_ENDPOINT=https://hf-mirror.com`.

If you want to use the SwanLab logger, consider running `bash examples/qwen2_5_vl_7b_geo3k_swanlab.sh`.
## Custom Dataset
Please refer to the example datasets to prepare your own dataset.
- Text dataset: https://huggingface.co/datasets/hiyouga/math12k
- Vision-text dataset: https://huggingface.co/datasets/hiyouga/geometry3k
EasyR1 already supports multi-image datasets.
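If you are unsure about the expected layout, a quick way to see it is to inspect one of the example datasets. The snippet below is only a sanity-check sketch using the `datasets` library; the column names it prints are whatever the example dataset defines, and your own dataset should mirror them.

```bash
# Sketch: print the columns and one row of the example vision-text dataset.
# Nothing here is specific to EasyR1; it only shows the layout your data should follow.
python3 - <<'EOF'
from datasets import load_dataset

ds = load_dataset("hiyouga/geometry3k", split="train")  # split name as published on the Hub
print(ds.column_names)  # the columns your custom dataset should provide
print(ds[0])            # one full example row (problem text, answer, image data)
EOF
```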
## How to Understand GRPO in EasyR1
- To learn about the GRPO algorithm, you can refer to Hugging Face's blog.
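In brief, GRPO samples a group of $G$ responses for each prompt, scores them with the reward function, and normalizes each reward by the group's statistics instead of using a learned value function:

$$
\hat{A}_i = \frac{r_i - \operatorname{mean}(r_1, \dots, r_G)}{\operatorname{std}(r_1, \dots, r_G)}
$$

This group-relative advantage is then used in a PPO-style clipped objective with a KL penalty toward the reference policy.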
## How to Run 70B+ Model in Multi-node Environment

Please see veRL's official documentation for multi-node training and the Ray debugger.
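At a high level, the workflow is: start a Ray head node, attach the worker nodes, then launch the training script from the head. The commands below are a generic Ray sketch with placeholder addresses and ports; follow veRL's documentation for the authoritative setup.

```bash
# On the head node (port 6379 is an arbitrary choice; use any open port).
ray start --head --port=6379

# On each worker node, join the cluster via the head node's address.
ray start --address=<head_node_ip>:6379

# Then launch the training script from the head node as usual.
bash examples/qwen2_5_vl_7b_geo3k_grpo.sh
```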
## Other Baselines

We have also reproduced the following two baselines from the R1-V project.

- CLEVR-70k-Counting: Train the Qwen2.5-VL-3B-Instruct model on the counting problem.
- GeoQA-8k: Train the Qwen2.5-VL-3B-Instruct model on the GeoQA problem.
## Awesome Work using EasyR1

- MMR1: Advancing the Frontiers of Multimodal Reasoning. [code](https://github.com/LengSicong/MMR1)
- Vision-R1: Incentivizing Reasoning Capability in Multimodal Large Language Models. [code](https://github.com/Osilly/Vision-R1) [paper](https://arxiv.org/abs/2503.06749)
- Seg-Zero: Reasoning-Chain Guided Segmentation via Cognitive Reinforcement. [code](https://github.com/dvlab-research/Seg-Zero) [paper](https://arxiv.org/abs/2503.06520)
- MetaSpatial: Reinforcing 3D Spatial Reasoning in VLMs for the Metaverse. [code](https://github.com/PzySeere/MetaSpatial) [paper](https://arxiv.org/abs/2503.18470)
## TODO
- Support LoRA (high priority).
- Support ulysses parallelism for VLMs (middle priority).
- Support more VLM architectures.
We will not provide scripts for supervised fine-tuning and inference in this project. If you have such requirements, we recommend using LLaMA-Factory.
## Known bugs

The following features are temporarily disabled; we plan to fix them one by one in future updates.
- Vision language models are not compatible with ulysses parallelism yet.
## Discussion Group
👋 Join our WeChat group.
## Citation
Core contributors: Yaowei Zheng, Junting Lu, Shenzhi Wang, Zhangchi Feng, Dongdong Kuang and Yuwen Xiong
We also thank Guangming Sheng and Chi Zhang for helpful discussions.
```bibtex
@misc{zheng2025easyr1,
  title        = {EasyR1: An Efficient, Scalable, Multi-Modality RL Training Framework},
  author       = {Yaowei Zheng and Junting Lu and Shenzhi Wang and Zhangchi Feng and Dongdong Kuang and Yuwen Xiong},
  howpublished = {\url{https://github.com/hiyouga/EasyR1}},
  year         = {2025}
}
```
We also recommend citing the original work.
```bibtex
@article{sheng2024hybridflow,
  title   = {HybridFlow: A Flexible and Efficient RLHF Framework},
  author  = {Guangming Sheng and Chi Zhang and Zilingfeng Ye and Xibin Wu and Wang Zhang and Ru Zhang and Yanghua Peng and Haibin Lin and Chuan Wu},
  year    = {2024},
  journal = {arXiv preprint arXiv:2409.19256}
}
```