Reasoning Step
Reasoning step research focuses on improving the ability of large language models (LLMs) to solve complex problems by decomposing them into a series of intermediate steps. Current efforts concentrate on enhancing the generation and verification of these steps, exploring techniques such as chain-of-thought prompting, preference optimization (e.g., Direct Preference Optimization, Step-DPO), and structured intermediate representations (e.g., relation tuples, pseudocode). This work matters because stronger multi-step reasoning is crucial for building more reliable and explainable AI systems across diverse applications, from question answering to mathematical problem solving.
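Chain-of-thought prompting, mentioned above, can be sketched as a simple prompt-construction helper: a worked, step-by-step exemplar is prepended to the question so the model imitates intermediate reasoning before stating a final answer. This is a minimal illustration; the exemplar, question, and function name are hypothetical and not drawn from the papers listed below.

```python
# Minimal sketch of few-shot chain-of-thought (CoT) prompting.
# The exemplar below is a made-up arithmetic problem used only to
# demonstrate the prompt structure; it is not from any cited paper.

COT_EXEMPLAR = (
    "Q: A farm has 3 pens with 4 sheep each. 2 sheep are sold. "
    "How many sheep remain?\n"
    "A: Step 1: 3 pens x 4 sheep = 12 sheep. "
    "Step 2: 12 - 2 = 10. The answer is 10.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked exemplar and cue the model to begin with an
    explicit intermediate step rather than a bare final answer."""
    return COT_EXEMPLAR + f"Q: {question}\nA: Step 1:"

prompt = build_cot_prompt("A box holds 6 eggs. How many eggs are in 5 boxes?")
print(prompt)
```

The returned string would then be sent to an LLM; verification-oriented methods such as Step-DPO additionally score or rank the generated intermediate steps rather than only the final answer.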
Papers
Assessing Step-by-Step Reasoning against Lexical Negation: A Case Study on Syllogism
Mengyu Ye, Tatsuki Kuribayashi, Jun Suzuki, Goro Kobayashi, Hiroaki Funayama
CoF-CoT: Enhancing Large Language Models with Coarse-to-Fine Chain-of-Thought Prompting for Multi-domain NLU Tasks
Hoang H. Nguyen, Ye Liu, Chenwei Zhang, Tao Zhang, Philip S. Yu