OThink-R1: Intrinsic Fast/Slow Thinking Mode Switching for Over-Reasoning Mitigation
Abstract
OThink-R1 is introduced to reduce reasoning redundancy in complex problem-solving by classifying reasoning steps as essential or redundant and dynamically switching thinking modes based on task complexity.
Recent advanced large reasoning models (LRMs) leverage extended chain-of-thought (CoT) reasoning to solve complex tasks, achieving state-of-the-art performance. Despite their success, we identify a critical issue: a substantial portion of the simple tasks solved by LRMs can also be addressed by non-reasoning LLMs using significantly fewer tokens, indicating that complex reasoning is not always necessary. To address this, we systematically analyze the reasoning trajectories of LRMs and present a method that uses identified paradigms and an LLM-Judge to classify these trajectories as either Redundant Reasoning or Essential Reasoning. Building on this analysis, we introduce OThink-R1, a method that prunes redundant reasoning steps while preserving logical validity. OThink-R1 dynamically employs the non-thinking mode (fast thinking) for straightforward problems while engaging in deliberate reasoning (slow thinking) for complex problems. Experiments across mathematical and question-answering tasks demonstrate that OThink-R1 reduces reasoning redundancy by almost 23\% on average without compromising accuracy, offering practical guidelines for efficient reasoning models. The code is available at https://github.com/AgenticIR-Lab/OThink-R1.
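The curation idea described above can be sketched in a few lines. This is an illustrative sketch only, not the paper's implementation: the `judge_redundant` heuristic stands in for the paper's paradigm-based LLM-Judge, and the `<think>` tag format and all names here are assumptions. The gist is that a trajectory whose answer a non-reasoning model already matches gets its CoT pruned to an empty think block (a fast-thinking target), while essential reasoning is kept verbatim (a slow-thinking target).

```python
from dataclasses import dataclass


@dataclass
class Trajectory:
    """One LRM rollout: question, its chain-of-thought, and the final answer."""
    question: str
    reasoning: str  # chain-of-thought between <think> tags
    answer: str     # final answer emitted after reasoning


def judge_redundant(traj: Trajectory, fast_answer: str) -> bool:
    """Hypothetical stand-in for the LLM-Judge: the reasoning is deemed
    redundant when a non-reasoning model already reaches the same answer
    without any CoT tokens."""
    return fast_answer.strip() == traj.answer.strip()


def build_training_example(traj: Trajectory, fast_answer: str) -> str:
    """Emit a fine-tuning target: pruned (fast-thinking) if the CoT is
    redundant, otherwise the original reasoning (slow-thinking)."""
    if judge_redundant(traj, fast_answer):
        # Fast-thinking target: empty think block, answer only.
        return f"<think>\n</think>\n{traj.answer}"
    # Slow-thinking target: the essential reasoning is preserved.
    return f"<think>\n{traj.reasoning}\n</think>\n{traj.answer}"


traj = Trajectory("What is 2 + 3?", "2 plus 3 equals 5.", "5")
print(build_training_example(traj, "5"))  # pruned: the CoT was redundant
```

A model fine-tuned on such mixed targets can then switch modes intrinsically at inference time, emitting an empty think block for easy inputs and full CoT for hard ones.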
Community
OThink-R1 provides a framework that enables LLMs to conduct hybrid reasoning, i.e., fast thinking (non-thinking) or slow thinking.
Code: https://github.com/AgenticIR-Lab/OThink-R1
arxiv: https://arxiv.org/abs/2506.02397
The following papers were recommended by the Semantic Scholar API
- Learning When to Think: Shaping Adaptive Reasoning in R1-Style Models via Multi-Stage RL (2025)
- When to Continue Thinking: Adaptive Thinking Mode Switching for Efficient Reasoning (2025)
- Done Is Better than Perfect: Unlocking Efficient Reasoning by Structured Multi-Turn Decomposition (2025)
- Adaptive Deep Reasoning: Triggering Deep Thinking When Needed (2025)
- Don't Think Longer, Think Wisely: Optimizing Thinking Dynamics for Large Reasoning Models (2025)
- Fast-Slow Thinking for Large Vision-Language Model Reasoning (2025)
- PATS: Process-Level Adaptive Thinking Mode Switching (2025)