Multi-Step Reasoning
Multi-step reasoning research focuses on enhancing the ability of large language models (LLMs) to solve complex problems that require multiple sequential inference steps. Current efforts concentrate on improving how LLMs plan, execute, and verify these steps, often employing techniques such as chain-of-thought prompting, structured planning with world models, and the integration of external tools or knowledge graphs. This research is crucial for advancing AI capabilities across fields ranging from automated problem-solving and decision-making to more sophisticated question answering and improved human-computer interaction. The development of robust benchmarks and evaluation metrics is also a key focus, enabling more rigorous comparison of approaches and clearer tracking of progress.
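To make the plan/execute/verify loop concrete, here is a minimal sketch in Python of how chain-of-thought prompting with a separate verification pass might be wired together. The `query_llm` function is a hypothetical placeholder for whatever model API is in use, and the prompt wording is illustrative, not drawn from any of the papers listed below.

```python
from typing import Callable

def query_llm(prompt: str) -> str:
    """Hypothetical placeholder for a real LLM call.

    Replace the body with a call to your model of choice; the stub
    below just echoes a summary so the sketch runs end to end.
    """
    return f"[model output for prompt of {len(prompt)} chars]"

def solve_with_chain_of_thought(question: str,
                                llm: Callable[[str], str] = query_llm) -> str:
    # Step 1 (plan/execute): elicit intermediate reasoning steps
    # rather than a direct answer.
    reasoning = llm(
        f"Question: {question}\n"
        "Think step by step, numbering each inference step, "
        "then state the final answer on its own line."
    )

    # Step 2 (verify): a separate pass that checks the generated
    # steps, mirroring the verification stage described above.
    verdict = llm(
        f"Question: {question}\n"
        f"Proposed reasoning:\n{reasoning}\n"
        "Check each numbered step for errors. "
        "Reply 'VALID' if every step holds, otherwise 'INVALID'."
    )

    if "INVALID" in verdict:
        # Step 3: one retry with the critique fed back in.
        reasoning = llm(
            f"Question: {question}\n"
            f"A previous attempt was judged flawed:\n{reasoning}\n"
            "Produce a corrected step-by-step solution."
        )
    return reasoning

if __name__ == "__main__":
    print(solve_with_chain_of_thought(
        "A train travels 60 km in 45 minutes. What is its speed in km/h?"
    ))
```

This verify-then-retry loop is only the simplest instance of the pattern; the approaches surveyed here typically sample multiple reasoning chains, consult external tools, or ground the verification step in a knowledge graph.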
Papers
Towards Trustworthy Knowledge Graph Reasoning: An Uncertainty Aware Perspective
Bo Ni, Yu Wang, Lu Cheng, Erik Blasch, Tyler Derr
Transformers Provably Solve Parity Efficiently with Chain of Thought
Juno Kim, Taiji Suzuki
Exploring the Role of Reasoning Structures for Constructing Proofs in Multi-Step Natural Language Reasoning with Large Language Models
Zi'ou Zheng, Christopher Malon, Martin Renqiang Min, Xiaodan Zhu