Reasoning Capability
Reasoning capability in large language models (LLMs) is a central research focus aimed at enhancing their ability to solve complex problems that require multiple steps and logical inference. Current work investigates prompting techniques such as chain-of-thought prompting and retrieval-augmented generation (RAG) to improve performance across diverse tasks, including mathematical, logical, and commonsense reasoning, often evaluated on benchmarks such as GSM8K and its variants. These efforts aim to characterize the limitations of current LLMs, which often rely on pattern matching rather than genuine logical deduction, and to develop more robust and reliable reasoning methods. The ultimate goal is to create LLMs capable of genuine reasoning, with impact on fields ranging from scientific discovery to personalized education and decision support systems.
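As a concrete illustration of the prompting techniques mentioned above, the following is a minimal Python sketch of chain-of-thought prompting combined with self-consistency voting over sampled reasoning chains. It is not the method of any paper listed below: the helper names (build_cot_prompt, extract_answer, self_consistency), the GSM8K-style exemplar, and the canned completions are all illustrative assumptions; in practice the completions would come from sampling an LLM API at nonzero temperature.

```python
import re
from collections import Counter

# Few-shot chain-of-thought exemplar in the style of GSM8K word problems.
COT_EXAMPLE = (
    "Q: A farmer has 3 pens with 4 sheep each. He buys 5 more sheep. "
    "How many sheep does he have?\n"
    "A: The pens hold 3 * 4 = 12 sheep. Adding 5 more gives 12 + 5 = 17. "
    "The answer is 17.\n\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a worked example so the model is nudged to reason step by step."""
    return COT_EXAMPLE + f"Q: {question}\nA: Let's think step by step."

def extract_answer(completion: str) -> str | None:
    """Pull the final number after 'The answer is', a common GSM8K convention."""
    match = re.search(r"answer is\s*(-?\d+(?:\.\d+)?)", completion, re.IGNORECASE)
    return match.group(1) if match else None

def self_consistency(completions: list[str]) -> str | None:
    """Majority-vote over the final answers of several sampled reasoning chains."""
    answers = [a for a in (extract_answer(c) for c in completions) if a is not None]
    return Counter(answers).most_common(1)[0][0] if answers else None

if __name__ == "__main__":
    # `fake_samples` stands in for completions returned by an LLM;
    # one chain contains an arithmetic slip that the vote outweighs.
    fake_samples = [
        "There are 2 * 6 = 12 apples, plus 3 is 15. The answer is 15.",
        "2 boxes of 6 apples is 12; 12 + 3 = 15. The answer is 15.",
        "6 + 3 = 9 apples per box, times 2 is 18. The answer is 18.",
    ]
    question = "Two boxes hold 6 apples each; 3 more apples are added. How many apples?"
    print(build_cot_prompt(question))
    print("Voted answer:", self_consistency(fake_samples))
```

The voting step reflects the intuition behind self-consistency: individual reasoning chains may contain errors, but agreement among independently sampled chains tends to favor the correct final answer.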
Papers
Pyramid-Driven Alignment: Pyramid Principle Guided Integration of Large Language Models and Knowledge Graphs
Lei Sun, Xinchen Wang, Youdi Li
Exploiting LLMs' Reasoning Capability to Infer Implicit Concepts in Legal Information Retrieval
Hai-Long Nguyen, Tan-Minh Nguyen, Duc-Minh Nguyen, Thi-Hai-Yen Vuong, Ha-Thanh Nguyen, Xuan-Hieu Phan
Think Beyond Size: Adaptive Prompting for More Effective Reasoning
Kamesh R
Can Knowledge Graphs Make Large Language Models More Trustworthy? An Empirical Study over Open-ended Question Answering
Yuan Sui, Yufei He, Zifeng Ding, Bryan Hooi
Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models
Wenting Tan, Dongxiao Chen, Jieting Xue, Zihao Wang, Taijie Chen
Deep Learning for Generalised Planning with Background Knowledge
Dillon Z. Chen, Rostislav Horčík, Gustav Šír
Enhancing Language Model Reasoning via Weighted Reasoning in Self-Consistency
Tim Knappe, Ryan Li, Ayush Chauhan, Kaylee Chhua, Kevin Zhu, Sean O'Brien
Enhance Reasoning by Learning from Mistakes: Peer-Review Knowledge Distillation from Multiple Large Language Models
Zhuochun Li, Yuelyu Ji, Rui Meng, Daqing He
System 2 Reasoning Capabilities Are Nigh
Scott C. Lowe
Understanding Reasoning in Chain-of-Thought from the Hopfieldian View
Lijie Hu, Liang Liu, Shu Yang, Xin Chen, Zhen Tan, Muhammad Asif Ali, Mengdi Li, Di Wang
Deliberate Reasoning for LLMs as Structure-aware Planning with Accurate World Model
Siheng Xiong, Ali Payani, Yuan Yang, Faramarz Fekri