Paper ID: 2411.11371

Rethinking Thinking Tokens: Understanding Why They Underperform in Practice

Sreeram Vennam, David Valente, David Herel, Ponnurangam Kumaraguru

Thinking Tokens (TTs) have been proposed as an unsupervised method to facilitate reasoning in language models. However, despite their conceptual appeal, our findings show that TTs only marginally improve performance and consistently underperform Chain-of-Thought (CoT) reasoning across multiple benchmarks. We hypothesize that this underperformance stems from the reliance of TTs on a single embedding, which results in inconsistent learning signals and introduces noisy gradients. This paper provides a comprehensive empirical analysis to validate this hypothesis and discusses the implications for future research on unsupervised reasoning in LLMs.
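To make the hypothesized failure mode concrete: since the paper's implementation is not reproduced here, the following is a minimal sketch, assuming a PyTorch-style setup, of how thinking tokens with a single shared embedding might be inserted into a sequence. The names `THINK_ID` and `insert_thinking_tokens`, and all dimensions, are hypothetical and chosen for illustration only; the point is that every TT position indexes the same embedding row, so gradients from many heterogeneous contexts all flow into one vector.

```python
import torch
import torch.nn as nn

# Hypothetical illustration (not the authors' code): one extra vocabulary
# entry serves as the thinking token, so every TT occurrence shares a
# single learned embedding row.
VOCAB_SIZE = 50_000
D_MODEL = 512
THINK_ID = VOCAB_SIZE  # reserve one id beyond the normal vocabulary

embedding = nn.Embedding(VOCAB_SIZE + 1, D_MODEL)

def insert_thinking_tokens(input_ids: torch.Tensor, n: int = 2) -> torch.Tensor:
    """Interleave n copies of the thinking token after every input token."""
    batch, seq = input_ids.shape
    think = torch.full((batch, seq, n), THINK_ID, dtype=input_ids.dtype)
    # (batch, seq, 1 + n) -> (batch, seq * (1 + n))
    return torch.cat([input_ids.unsqueeze(-1), think], dim=-1).reshape(batch, -1)

ids = torch.randint(0, VOCAB_SIZE, (1, 4))
print(insert_thinking_tokens(ids))  # each original token followed by 2 TT ids
```

Under this setup, `embedding.weight[THINK_ID]` receives gradient contributions from every TT position in every training example, which is one way the "inconsistent learning signals" the abstract describes could arise.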

Submitted: Nov 18, 2024