arxiv:2505.24362

Knowing Before Saying: LLM Representations Encode Information About Chain-of-Thought Success Before Completion

Published on May 30
Submitted by anumafzal94 on Jun 4

Abstract

A probing classifier based on LLM representations predicts the success of zero-shot CoT reasoning before any tokens are generated, suggesting that key reasoning information is captured early and that early stopping can still improve performance over using no CoT at all.

AI-generated summary

We investigate whether the success of a zero-shot Chain-of-Thought (CoT) process can be predicted before completion. We discover that a probing classifier, based on LLM representations, performs well even before a single token is generated, suggesting that crucial information about the reasoning process is already present in the initial steps' representations. In contrast, a strong BERT-based baseline, which relies solely on the generated tokens, performs worse, likely because it depends on shallow linguistic cues rather than deeper reasoning dynamics. Surprisingly, using later reasoning steps does not always improve classification. When additional context is unhelpful, earlier representations more closely resemble later ones, suggesting that LLMs encode key information early. This implies reasoning can often stop early without loss. To test this, we conduct early-stopping experiments, showing that truncating CoT reasoning still improves performance over not using CoT at all, though a gap remains compared to full reasoning. However, approaches such as supervised learning or reinforcement learning designed to shorten CoT chains could leverage our classifier's guidance to identify when early stopping is effective. Our findings provide insights that may support such methods, helping to optimize CoT's efficiency while preserving its benefits.
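
The probing setup described above lends itself to a small illustration. Below is a minimal sketch of such a classifier, assuming a generic Hugging Face causal LM, last-token pooling of the final hidden layer, and a logistic-regression probe; the model name and the `questions` / `cot_succeeded` labels are placeholders, not the authors' exact configuration.

```python
# Minimal probing-classifier sketch (illustrative): predict zero-shot CoT
# success from the prompt's last hidden state, before any token is generated.
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

model_name = "meta-llama/Llama-3.1-8B-Instruct"  # assumption: any causal LM works here
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForCausalLM.from_pretrained(model_name)
model.eval()

def prompt_representation(question: str) -> torch.Tensor:
    """Last-layer hidden state of the final prompt token (no CoT tokens yet)."""
    prompt = f"{question}\nLet's think step by step."
    inputs = tokenizer(prompt, return_tensors="pt")
    with torch.no_grad():
        out = model(**inputs, output_hidden_states=True)
    return out.hidden_states[-1][0, -1, :]

# Assumed labeled data: `questions` (list of str) and `cot_succeeded`
# (0/1 flags from a full zero-shot CoT run on some benchmark).
X = torch.stack([prompt_representation(q) for q in questions]).float().numpy()
y = list(cot_succeeded)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.2, random_state=0)
probe = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)
print("probe accuracy:", probe.score(X_te, y_te))
```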

Community

Paper author / Paper submitter:

Chain-of-Thought is powerful but can be costly. What if there were a way to predict, before starting, whether an LLM can solve a problem using CoT? We found that a simple probing classifier, based on LLM representations, can accurately predict the success of a zero-shot CoT reasoning process before any tokens are generated. This suggests that crucial information about the reasoning's outcome is already embedded in the initial representations.

The implications? Early stopping of reasoning when success is unlikely could lead to more efficient CoT strategies—saving resources while maintaining performance.
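
As an illustration of that idea, here is a hypothetical gating sketch built on the probe from the sketch above (reusing `model`, `tokenizer`, `probe`, and `prompt_representation`); the 0.5 threshold and the direct-answer fallback are assumptions for illustration, not the paper's procedure.

```python
# Hypothetical early-stopping gate: spend CoT tokens only when the probe
# expects the reasoning to succeed; otherwise answer directly.
def generate(question: str, cot: bool, max_new_tokens: int = 256) -> str:
    suffix = "\nLet's think step by step." if cot else "\nAnswer:"
    inputs = tokenizer(question + suffix, return_tensors="pt")
    with torch.no_grad():
        out = model.generate(**inputs, max_new_tokens=max_new_tokens)
    new_tokens = out[0, inputs["input_ids"].shape[1]:]
    return tokenizer.decode(new_tokens, skip_special_tokens=True)

def answer_with_gate(question: str, threshold: float = 0.5) -> str:
    rep = prompt_representation(question).float().numpy().reshape(1, -1)
    p_success = probe.predict_proba(rep)[0, 1]  # probe's estimate of CoT success
    return generate(question, cot=(p_success >= threshold))
```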
