Paper ID: 2311.07945

Well begun is half done: Importance of Starting Right in Multi-Step Math Reasoning

Kushal Jain, Niket Tandon, Kumar Shridhar

Smaller language models can solve complex reasoning tasks better by learning to generate rationales for their predictions. However, we observe that these smaller models can sometimes struggle to start correctly, but when corrected, can solve a task that they would otherwise have struggled with. We propose two ways in which a smaller model can benefit from initial guidance: 1) asking an LLM for initial guidance, and 2) self-questioning guidance, where the student model first poses a question about how to start and then continues that reasoning chain. We extend initial question-based guidance to a prompting technique called QuestCoT, where starting with a question before the chain of reasoning proves useful. On two multi-step math reasoning datasets, GSM8K and SVAMP, we show that starting correctly can lead to a significant performance gain (up to $+14$ points with LLM guidance and $+6$ points with QuestCoT).
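As a rough illustration of the two strategies the abstract describes, here is a minimal Python sketch. The prompt wording, the model names, and the `generate` backend are hypothetical placeholders (the paper's exact prompts are not given here); it only shows the shape of the approach: either a larger LLM supplies the first step, or the student model poses its own starting question (QuestCoT) before reasoning.

```python
# Minimal sketch of the two guidance strategies described in the abstract.
# The prompt wording, model names, and `generate` backend are illustrative
# assumptions, not the paper's exact setup.

def generate(model: str, prompt: str) -> str:
    """Placeholder for any LM completion call; plug in your own backend."""
    raise NotImplementedError

def llm_guided_start(problem: str) -> str:
    # 1) LLM guidance: ask a larger model only for the first reasoning
    #    step, then let the smaller student model continue the chain.
    first_step = generate("large-llm", f"Q: {problem}\nFirst step:")
    return generate(
        "student", f"Q: {problem}\nFirst step: {first_step}\nContinue:"
    )

def questcot(problem: str) -> str:
    # 2) QuestCoT: the student model itself first poses a question about
    #    how to start, then continues the reasoning chain from it.
    exemplar = (  # hypothetical few-shot exemplar showing the format
        "Q: A baker made 24 muffins and sold 15. How many are left?\n"
        "Start by asking: How many muffins remain after selling 15 of 24?\n"
        "Reasoning: 24 - 15 = 9. Answer: 9\n\n"
    )
    return generate("student", exemplar + f"Q: {problem}\nStart by asking:")
```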

Submitted: Nov 14, 2023