---
pipeline_tag: text-generation
inference: true
license: apache-2.0
datasets:
- simplescaling/s1K
base_model:
- Qwen/Qwen2.5-0.5B-Instruct
library_name: transformers
language:
- ar
- de
- en
- es
- fr
- it
- ja
- ko
- pt
- ru
- th
- vi
- zh
---
# Model Summary
> s1-0.5B is a reasoning model finetuned from Qwen2.5-0.5B-Instruct on just 1,000 examples. It was created to test whether the training process used for the original s1 (cited below) can be reproduced on consumer-grade GPUs.
- **Repository:** [simplescaling/s1](https://github.com/simplescaling/s1)
- **Paper:** https://arxiv.org/abs/2501.19393
# Use
The model usage is documented [here](https://github.com/simplescaling/s1?tab=readme-ov-file#inference).
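For a quick local smoke test, a minimal sketch using the Transformers library is shown below. The repository id `2stacks/s1-0.5B` is an assumption based on this model card's location; substitute the actual id if it differs.
```python
# Minimal inference sketch with Transformers (not the upstream s1 inference setup).
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "2stacks/s1-0.5B"  # assumed repository id
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

messages = [{"role": "user", "content": "How many r's are in the word raspberry?"}]
inputs = tokenizer.apply_chat_template(
    messages, add_generation_prompt=True, return_tensors="pt"
).to(model.device)

outputs = model.generate(inputs, max_new_tokens=512)
# Decode only the newly generated tokens.
print(tokenizer.decode(outputs[0][inputs.shape[-1]:], skip_special_tokens=True))
```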
# Citation
```bibtex
@misc{muennighoff2025s1simpletesttimescaling,
      title={s1: Simple test-time scaling},
      author={Niklas Muennighoff and Zitong Yang and Weijia Shi and Xiang Lisa Li and Li Fei-Fei and Hannaneh Hajishirzi and Luke Zettlemoyer and Percy Liang and Emmanuel Candès and Tatsunori Hashimoto},
      year={2025},
      eprint={2501.19393},
      archivePrefix={arXiv},
      primaryClass={cs.CL},
      url={https://arxiv.org/abs/2501.19393},
}
```