---
license: mit
datasets:
- agentica-org/DeepScaleR-Preview-Dataset
base_model:
- deepseek-ai/DeepSeek-R1-Distill-Qwen-1.5B
tags:
- LRM
- hybrid_reasoning
- efficient_reasoning
---

# AdaptThink: LLM Can Learn When to Think

<p align="center">
🤗 <a href="https://huggingface.co/collections/THU-KEG/adaptthink-682a1059aa9f5102c4fa0470" target="_blank">HF Collections</a> • 💻 <a href="" target="_blank">Github Repo</a> • 📃 <a href="https://arxiv.org/abs/2505.13417" target="_blank">Paper</a>
</p>

## 🔍 Table of Contents
- [🤖️ AdaptThink](#adapt_think)
- [⚙️ Released Models](#model)
- [📊 Evaluation](#evaluation)
- [📝 Citation](#citation)

<a name="adapt_think"></a>
## 🤖️ AdaptThink
We present **AdaptThink**, a novel reinforcement learning (RL) algorithm that enables reasoning models to adaptively choose between **Thinking** and **NoThinking** modes according to the difficulty of each input problem, thereby achieving automatic hybrid reasoning. Specifically, the model engages in thinking only when the problem is judged to be challenging; for simpler questions, it bypasses the thinking process and directly produces a concise final solution. This approach substantially reduces inference costs while further improving overall performance.

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66cdd285c51a915bd5f2d017/JaeJiBwLkcwAuexRAkLX5.png)
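
The snippet below is a minimal, unofficial inference sketch using 🤗 Transformers. It assumes the released checkpoints keep the DeepSeek-R1-Distill chat template, in which the reasoning trace ends with a `</think>` tag and a NoThinking response closes that tag almost immediately; the model ID, sample questions, sampling settings, and the 20-character heuristic are illustrative choices, not part of the official pipeline.

```python
from transformers import AutoModelForCausalLM, AutoTokenizer

model_id = "THU-KEG/AdaptThink-7B-delta0.05"  # any checkpoint from the table below
tokenizer = AutoTokenizer.from_pretrained(model_id)
model = AutoModelForCausalLM.from_pretrained(model_id, torch_dtype="auto", device_map="auto")

def solve(problem: str, max_new_tokens: int = 4096) -> str:
    """Generate a response for a single problem with the model's chat template."""
    input_ids = tokenizer.apply_chat_template(
        [{"role": "user", "content": problem}],
        add_generation_prompt=True,
        return_tensors="pt",
    ).to(model.device)
    output_ids = model.generate(
        input_ids, max_new_tokens=max_new_tokens, do_sample=True, temperature=0.6
    )
    # Keep the think tags in the decoded text so the chosen mode can be inspected.
    return tokenizer.decode(output_ids[0][input_ids.shape[1]:], skip_special_tokens=False)

def used_thinking(response: str) -> bool:
    # Heuristic: a NoThinking response emits `</think>` with essentially no
    # reasoning text before it; a Thinking response produces a long trace first.
    head, sep, _ = response.partition("</think>")
    return not sep or len(head.strip()) > 20

for question in ("What is 2 + 2?",
                 "Find the number of positive integers n < 1000 such that n^2 + 1 is divisible by 5."):
    answer = solve(question)
    print(f"{question}\n  thinking mode: {used_thinking(answer)}\n")
```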

<a name="model"></a>
## ⚙️ Released Models

### Available Models
We apply the AdaptThink algorithm to DeepSeek-R1-Distill-Qwen-1.5B with $\delta$ ranging from 0 to 0.1, and to DeepSeek-R1-Distill-Qwen-7B with $\delta=0.05$. A larger $\delta$ results in a higher proportion of NoThinking responses, which further reduces inference costs but also diminishes the resulting improvement in accuracy (see the comparison sketch after the table).

All the trained models are available on HuggingFace.

| Name | HF Repo |
|---|---|
| AdaptThink-1.5B-delta0 | [🤗 HF Repo](https://huggingface.co/THU-KEG/AdaptThink-1.5B-delta0) |
| AdaptThink-1.5B-delta0.01 | [🤗 HF Repo](https://huggingface.co/THU-KEG/AdaptThink-1.5B-delta0.01) |
| AdaptThink-1.5B-delta0.02 | [🤗 HF Repo](https://huggingface.co/THU-KEG/AdaptThink-1.5B-delta0.02) |
| AdaptThink-1.5B-delta0.05 | [🤗 HF Repo](https://huggingface.co/THU-KEG/AdaptThink-1.5B-delta0.05) |
| AdaptThink-1.5B-delta0.075 | [🤗 HF Repo](https://huggingface.co/THU-KEG/AdaptThink-1.5B-delta0.075) |
| AdaptThink-1.5B-delta0.1 | [🤗 HF Repo](https://huggingface.co/THU-KEG/AdaptThink-1.5B-delta0.1) |
| AdaptThink-7B-delta0.05 | [🤗 HF Repo](https://huggingface.co/THU-KEG/AdaptThink-7B-delta0.05) |
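
The following rough sketch (again unofficial) loops over two of the checkpoints above with vLLM and reports the fraction of NoThinking responses together with the average number of generated tokens, which is one way to observe the $\delta$ trade-off in practice. The problem list is illustrative, and the `</think>`-based NoThinking check is the same assumption as in the earlier snippet.

```python
from transformers import AutoTokenizer
from vllm import LLM, SamplingParams

# Tiny illustrative problem set; use a real benchmark (e.g., MATH500) for meaningful numbers.
problems = [
    "What is 15% of 240?",
    "Compute the integral of x * e^x from 0 to 1.",
    "How many prime numbers are there between 1 and 50?",
]

def is_nothinking(text: str) -> bool:
    # A response whose reasoning segment (text before `</think>`) is essentially
    # empty is counted as NoThinking.
    head, sep, _ = text.partition("</think>")
    return bool(sep) and len(head.strip()) < 20

def summarize(repo: str) -> tuple[float, float]:
    tokenizer = AutoTokenizer.from_pretrained(repo)
    prompts = [
        tokenizer.apply_chat_template(
            [{"role": "user", "content": p}], tokenize=False, add_generation_prompt=True
        )
        for p in problems
    ]
    # In practice, run each checkpoint in its own process to avoid GPU-memory contention.
    llm = LLM(model=repo)
    outs = llm.generate(prompts, SamplingParams(temperature=0.6, top_p=0.95, max_tokens=8192))
    texts = [o.outputs[0].text for o in outs]
    lengths = [len(o.outputs[0].token_ids) for o in outs]
    return sum(map(is_nothinking, texts)) / len(texts), sum(lengths) / len(lengths)

for repo in ("THU-KEG/AdaptThink-1.5B-delta0", "THU-KEG/AdaptThink-1.5B-delta0.1"):
    ratio, avg_tokens = summarize(repo)
    print(f"{repo}: NoThinking ratio = {ratio:.2f}, avg generated tokens = {avg_tokens:.0f}")
```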

<a name="evaluation"></a>

## 📊 Evaluation Results

We list our evaluation results as follows:
##### 1. Comparison with existing methods for efficient reasoning on mathematics datasets

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66cdd285c51a915bd5f2d017/ZLV8ZfEet1dp-4jyzBxiG.png)

##### 2. NoThinking response ratio and accuracy across different difficulty levels on MATH500

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66cdd285c51a915bd5f2d017/GUNfW9qO2aaT9_lo1XXPf.png)

##### 3. Comparison of different $\delta$ values

![image/png](https://cdn-uploads.huggingface.co/production/uploads/66cdd285c51a915bd5f2d017/RXrXwxVSAYlR3-_t0GUwV.png)

##### 4. Evaluation results on MMLU

<img width="1000" alt="image" src="https://cdn-uploads.huggingface.co/production/uploads/66cdd285c51a915bd5f2d017/19K2u6PNmYz3gx3JnHgn4.png">

<a name="citation"></a>
## 📝 Citation

If you find our work useful, please consider citing AdaptThink:

```
@article{zhang2025adapt_think,
  title={AdaptThink: LLM Can Learn When to Think},
  author={Jiajie Zhang and Nianyi Lin and Lei Hou and Ling Feng and Juanzi Li},
  journal={arXiv preprint arXiv:2505.13417},
  url={https://arxiv.org/abs/2505.13417},
  year={2025}
}
```