---
# DNR Bench

Don’t Reason Bench (DNR Bench) is a novel benchmark designed to expose a vulnerability in current reasoning language models (RLMs): their tendency to over-reason by attempting to solve unsolvable problems, leading to excessively long responses.
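
To make the failure mode concrete, here is a minimal, hypothetical sketch of how over-reasoning might be quantified: a response that blows far past a length budget on an unsolvable prompt is a red flag. The function, the budget, and the whitespace tokenization below are illustrative assumptions, not the benchmark's scoring code.

```python
# Hypothetical sketch: flag over-reasoning by response length.
# Whitespace splitting is a crude stand-in for real tokenization,
# and the 200-token budget is an arbitrary illustrative choice.
def over_reasons(response: str, token_budget: int = 200) -> bool:
    """Return True if a response blows past the token budget."""
    return len(response.split()) > token_budget

# An unsolvable prompt should draw a short refusal, not a long derivation.
refusal = "This question cannot be answered: the premise is contradictory."
runaway = "Let us reason step by step. " * 100  # runaway chain of thought

assert not over_reasons(refusal)
assert over_reasons(runaway)
```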

# Data Summary
The DNR Bench dataset contains 150 adversarially crafted prompts divided into five distinct categories:

- Imaginary Reference
- Indifferent
- Math
- Redundant
- Unanswerable

Each category targets a specific failure mode observed in reasoning-optimized LLMs, such as hallucinating nonexistent references, failing to remain neutral in ambiguous contexts, incorrectly solving flawed math problems, overanalyzing redundant information, or answering questions that lack sufficient data.
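
As a usage sketch, the prompts can be pulled with the 🤗 `datasets` library. Note that the hub id, split name, and `category` column below are assumptions made for illustration; check the dataset viewer for the actual schema.

```python
# Minimal sketch: load DNR Bench and tally prompts per category.
from collections import Counter

from datasets import load_dataset

# Hypothetical hub id and split; not confirmed by this card.
ds = load_dataset("ServiceNow-AI/DNRBench", split="test")

# "category" is an assumed column name; counts should sum to 150.
print(Counter(row["category"] for row in ds))
```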

# Leaderboard
This dataset is used to evaluate reasoning LLMs on the [DNR Leaderboard on Hugging Face](https://huggingface.co/spaces/ServiceNow-AI/Do-not-reason-bench).
121 |
+
|
122 |
+
|
123 |
+
# Citation
|
124 |
+
```bibtex
|
125 |
+
@misc{hashemi2025dnrbenchbenchmarkingoverreasoning,
|
126 |
+
title={DNR Bench: Benchmarking Over-Reasoning in Reasoning LLMs},
|
127 |
+
author={Masoud Hashemi and Oluwanifemi Bamgbose and Sathwik Tejaswi Madhusudhan and Jishnu Sethumadhavan Nair and Aman Tiwari and Vikas Yadav},
|
128 |
+
year={2025},
|
129 |
+
eprint={2503.15793},
|
130 |
+
archivePrefix={arXiv},
|
131 |
+
primaryClass={cs.LG},
|
132 |
+
url={https://arxiv.org/abs/2503.15793},
|
133 |
+
}
|
134 |
+
```
|