Multi-Step Reasoning
Multi-step reasoning research focuses on improving the ability of large language models (LLMs) to solve complex problems that require several sequential inference steps. Current work concentrates on helping LLMs plan, execute, and verify these steps, often through chain-of-thought prompting, structured planning with world models, and the integration of external tools or knowledge graphs. Such advances matter for automated problem solving, decision making, sophisticated question answering, and richer human-computer interaction. Developing robust benchmarks and evaluation metrics is another key focus, enabling rigorous comparison of approaches and tracking of progress over time.
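As a concrete illustration of the chain-of-thought prompting mentioned above, the sketch below shows the basic pattern: a prompt includes a worked exemplar with intermediate steps, and the final answer is extracted from the model's completion. This is a minimal, generic sketch rather than the method of any paper listed here; `call_llm` is a hypothetical placeholder for whatever completion API is in use, and the exemplar and answer-extraction convention are assumptions for illustration.

```python
# Minimal sketch of chain-of-thought prompting for multi-step reasoning.
# `call_llm` is a hypothetical stand-in for any text-completion API.

def call_llm(prompt: str) -> str:
    """Placeholder: send `prompt` to an LLM and return its completion."""
    raise NotImplementedError("Wire this to a model provider of choice.")

def chain_of_thought_answer(question: str) -> str:
    # One-shot exemplar showing the intermediate steps the model should imitate.
    exemplar = (
        "Q: A shop sells pens in packs of 4. How many pens are in 6 packs?\n"
        "A: Let's think step by step. Each pack has 4 pens. "
        "6 packs have 6 * 4 = 24 pens. The answer is 24.\n\n"
    )
    prompt = exemplar + f"Q: {question}\nA: Let's think step by step."
    completion = call_llm(prompt)
    # Conventionally, the final answer follows the last "The answer is".
    return completion.rsplit("The answer is", 1)[-1].strip(" .\n")

# Example usage (assumes call_llm is connected to a real model):
# chain_of_thought_answer("A train travels 60 km/h for 2.5 hours. How far does it go?")
```

Many of the approaches surveyed here, such as tree-of-thoughts or ask-refine-trust loops, build on this basic prompt-and-extract pattern by branching, verifying, or revising the intermediate steps.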
Papers
Empowering Multi-step Reasoning across Languages via Tree-of-Thoughts
Leonardo Ranaldi, Giulia Pucci, Federico Ranaldi, Elena Sofia Ruzzetti, Fabio Massimo Zanzotto
The ART of LLM Refinement: Ask, Refine, and Trust
Kumar Shridhar, Koustuv Sinha, Andrew Cohen, Tianlu Wang, Ping Yu, Ram Pasunuru, Mrinmaya Sachan, Jason Weston, Asli Celikyilmaz
CoF-CoT: Enhancing Large Language Models with Coarse-to-Fine Chain-of-Thought Prompting for Multi-domain NLU Tasks
Hoang H. Nguyen, Ye Liu, Chenwei Zhang, Tao Zhang, Philip S. Yu
Towards a Mechanistic Interpretation of Multi-Step Reasoning Capabilities of Language Models
Yifan Hou, Jiaoda Li, Yu Fei, Alessandro Stolfo, Wangchunshu Zhou, Guangtao Zeng, Antoine Bosselut, Mrinmaya Sachan