We have released a paper for OpenThoughts! See our paper at https://arxiv.org/abs/2506.04178.

OpenThinker2-32B

This model is a fine-tuned version of Qwen/Qwen2.5-32B-Instruct on the OpenThoughts2-1M dataset.
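
Since the model is fine-tuned from Qwen/Qwen2.5-32B-Instruct, it can be loaded with the usual Hugging Face transformers chat workflow. The sketch below is illustrative only; the prompt and generation settings are assumptions, not recommendations from this card, and a 32B model in BF16 requires several large GPUs.

```python
# Minimal, hedged inference sketch using the standard transformers chat API.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "open-thoughts/OpenThinker2-32B"
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(
    model_id,
    torch_dtype=torch.bfloat16,  # BF16 weights; expect to shard across multiple GPUs
    device_map="auto",
)

messages = [{"role": "user", "content": "How many positive divisors does 360 have?"}]
input_ids = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

# Long max_new_tokens leaves room for the model's reasoning trace.
output_ids = model.generate(input_ids, max_new_tokens=4096, do_sample=True, temperature=0.6)
print(tokenizer.decode(output_ids[0][input_ids.shape[-1]:], skip_special_tokens=True))
```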

OpenThinker2-32B is the highest-performing open-data model. It improves upon our previous OpenThinker-32B model, which was trained on 114k examples from OpenThoughts-114k. The numbers reported in the table below were evaluated with our open-source tool Evalchemy.

| Model | AIME24 | AIME25 | AMC23 | MATH500 | GPQA-D | LCBv2 |
| --- | --- | --- | --- | --- | --- | --- |
| OpenThinker2-32B | 76.7 | 58.7 | 94.0 | 90.8 | 64.1 | 72.5 |
| OpenThinker-32B | 68.0 | 49.3 | 95.5 | 90.6 | 63.5 | 68.6 |
| DeepSeek-R1-Distill-Qwen-32B | 74.7 | 50.0 | 96.5 | 90.0 | 65.8 | 72.3 |
| Light-R1-32B | 74.7 | 58.0 | 96.0 | 90.4 | 62.0 | 56.0 |
| S1.1-32B | 59.3 | 42.7 | 91.5 | 87.4 | 62.0 | 58.7 |

Data

This model was trained on the OpenThoughts2-1M dataset.

The OpenThoughts2-1M dataset was constructed by augmenting OpenThoughts-114k with existing datasets such as OpenR1, as well as additional math and code reasoning data. We generated the additional math and code data by ablating over 26 different question-generation methodologies and sampling from the highest-performing ones.

See the OpenThoughts2-1M dataset page or our blog post for additional information.
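
For a quick look at the data, a small sketch follows; it assumes the dataset is published on the Hugging Face Hub under the id open-thoughts/OpenThoughts2-1M and that streaming is acceptable given its roughly 1M-example size.

```python
# Hedged sketch: peek at a few rows without downloading the full dataset.
# The repo id is an assumption based on the dataset page linked above.
from itertools import islice
from datasets import load_dataset

ds = load_dataset("open-thoughts/OpenThoughts2-1M", split="train", streaming=True)
for row in islice(ds, 3):
    print(row.keys())
```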

Intended uses & limitations

Apache 2.0 License

Training procedure

We trained the model for 50 hours on 128 nodes with four A100 GPUs each (512 GPUs in total).

Training hyperparameters

The following hyperparameters were used during training:

  • learning_rate: 8e-05
  • seed: 42
  • distributed_type: multi-GPU
  • num_devices: 512
  • gradient_accumulation_steps: 1
  • total_train_batch_size: 512
  • optimizer: AdamW (adamw_torch) with betas=(0.9, 0.999) and epsilon=1e-08; no additional optimizer arguments
  • lr_scheduler_type: cosine
  • lr_scheduler_warmup_ratio: 0.1
  • num_epochs: 5.0
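
For orientation, here is a hedged sketch of how these hyperparameters could map onto Hugging Face TrainingArguments. The actual multi-node training setup (launcher, sequence length, packing, and so on) lives in the open-thoughts repository and is not reproduced here; the output path is hypothetical.

```python
# Illustrative only: maps the reported hyperparameters onto transformers.TrainingArguments.
# AdamW betas=(0.9, 0.999) and epsilon=1e-08 are the adamw_torch defaults, so they are not set explicitly.
from transformers import TrainingArguments

training_args = TrainingArguments(
    output_dir="openthinker2-32b-sft",  # hypothetical output path
    learning_rate=8e-5,
    seed=42,
    per_device_train_batch_size=1,      # 512 GPUs x 1 x grad_accum 1 = total batch size 512
    gradient_accumulation_steps=1,
    lr_scheduler_type="cosine",
    warmup_ratio=0.1,
    num_train_epochs=5.0,
    optim="adamw_torch",
    bf16=True,
)
```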

Framework versions

  • Transformers 4.46.1
  • PyTorch 2.3.0
  • Datasets 3.1.0
  • Tokenizers 0.20.3

More info can be found in our repository: https://github.com/open-thoughts/open-thoughts.

Citation

@misc{guha2025openthoughtsdatarecipesreasoning,
  title={OpenThoughts: Data Recipes for Reasoning Models}, 
  author={Etash Guha and Ryan Marten and Sedrick Keh and Negin Raoof and Georgios Smyrnis and Hritik Bansal and Marianna Nezhurina and Jean Mercat and Trung Vu and Zayne Sprague and Ashima Suvarna and Benjamin Feuer and Liangyu Chen and Zaid Khan and Eric Frankel and Sachin Grover and Caroline Choi and Niklas Muennighoff and Shiye Su and Wanjia Zhao and John Yang and Shreyas Pimpalgaonkar and Kartik Sharma and Charlie Cheng-Jie Ji and Yichuan Deng and Sarah Pratt and Vivek Ramanujan and Jon Saad-Falcon and Jeffrey Li and Achal Dave and Alon Albalak and Kushal Arora and Blake Wulfe and Chinmay Hegde and Greg Durrett and Sewoong Oh and Mohit Bansal and Saadia Gabriel and Aditya Grover and Kai-Wei Chang and Vaishaal Shankar and Aaron Gokaslan and Mike A. Merrill and Tatsunori Hashimoto and Yejin Choi and Jenia Jitsev and Reinhard Heckel and Maheswaran Sathiamoorthy and Alexandros G. Dimakis and Ludwig Schmidt},
  year={2025},
  eprint={2506.04178},
  archivePrefix={arXiv},
  primaryClass={cs.LG},
  url={https://arxiv.org/abs/2506.04178}, 
}