Reasoning Capability
Reasoning capability in large language models (LLMs) is a central research area focusing on enhancing their ability to solve complex problems requiring multiple steps and logical inferences. Current research investigates various prompting techniques, such as chain-of-thought prompting and retrieval-augmented generation (RAG), to improve reasoning performance across diverse tasks, including mathematical, logical, and commonsense reasoning, often using benchmarks like GSM8K and its variants. These efforts aim to understand the limitations of current LLMs, which often rely on pattern matching rather than true logical deduction, and to develop more robust and reliable reasoning methods. The ultimate goal is to create LLMs capable of genuine reasoning, impacting fields ranging from scientific discovery to personalized education and decision support systems.
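Of the prompting techniques mentioned above, chain-of-thought (CoT) prompting is the simplest to illustrate: the model is shown a worked exemplar whose answer is derived step by step, then asked a new question in the same format so it imitates the reasoning style. The sketch below only constructs such a prompt as a string; the function name `build_cot_prompt` and the target question are illustrative, not from any of the listed papers (the exemplar is the well-known tennis-ball problem from the original CoT work).

```python
# Minimal sketch of chain-of-thought prompting: prepend a worked,
# step-by-step exemplar before the question we actually want answered.
# No model call is made here -- this only builds the prompt text.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 tennis balls each. "
    "How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the step-by-step exemplar so the model mimics its reasoning."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

# Hypothetical GSM8K-style question for demonstration.
prompt = build_cot_prompt(
    "A baker fills 4 trays with 6 rolls each. How many rolls in total?"
)
print(prompt)
```

The resulting string ends with a bare "A:" so that, when sent to an LLM, the model continues with its own step-by-step derivation rather than a bare final answer.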
Papers
MAGDi: Structured Distillation of Multi-Agent Interaction Graphs Improves Reasoning in Smaller Language Models
Justin Chih-Yao Chen, Swarnadeep Saha, Elias Stengel-Eskin, Mohit Bansal
K-Level Reasoning: Establishing Higher Order Beliefs in Large Language Models for Strategic Reasoning
Yadong Zhang, Shaoguang Mao, Tao Ge, Xun Wang, Yan Xia, Man Lan, Furu Wei
Reasoning Capacity in Multi-Agent Systems: Limitations, Challenges and Human-Centered Solutions
Pouya Pezeshkpour, Eser Kandogan, Nikita Bhutani, Sajjadur Rahman, Tom Mitchell, Estevam Hruschka