Reasoning Capability
Reasoning capability in large language models (LLMs) is a central research area focused on enhancing their ability to solve complex problems that require multiple steps and logical inferences. Current research investigates techniques such as chain-of-thought prompting and retrieval-augmented generation (RAG) to improve reasoning performance across diverse tasks, including mathematical, logical, and commonsense reasoning, often evaluated on benchmarks like GSM8K and its variants. These efforts aim to understand the limitations of current LLMs, which often rely on pattern matching rather than true logical deduction, and to develop more robust and reliable reasoning methods. The ultimate goal is to create LLMs capable of genuine reasoning, with impact on fields ranging from scientific discovery to personalized education and decision-support systems.
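As a minimal illustration of the chain-of-thought prompting mentioned above, the sketch below assembles a few-shot prompt for a GSM8K-style arithmetic word problem. The worked example, the helper name `build_cot_prompt`, and the "Let's think step by step" cue are illustrative assumptions, not from any specific paper; the actual model call is left abstract.

```python
# Minimal sketch of few-shot chain-of-thought (CoT) prompting for a
# GSM8K-style word problem. Only the prompt construction is shown; the
# LLM call itself is out of scope here.

# One worked example whose answer spells out its intermediate steps.
FEW_SHOT_EXAMPLE = (
    "Q: A farmer has 3 pens with 4 chickens each. He buys 5 more "
    "chickens. How many chickens does he have now?\n"
    "A: Let's think step by step. 3 pens * 4 chickens = 12 chickens. "
    "12 + 5 = 17. The answer is 17.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend the worked example and a step-by-step cue to a new question."""
    return FEW_SHOT_EXAMPLE + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt(
    "A shop sells 8 apples per box. How many apples are in 6 boxes?"
)
print(prompt)
```

The worked example demonstrates the desired output format (explicit intermediate arithmetic before the final answer), which is what nudges the model to emit its own reasoning steps rather than a bare answer.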
Papers
RealCQA-V2: Visual Premise Proving
Saleem Ahmed, Rangaraj Setlur, Venu Govindaraju
Let's Be Self-generated via Step by Step: A Curriculum Learning Approach to Automated Reasoning with Large Language Models
Kangyang Luo, Zichen Ding, Zhenmin Weng, Lingfeng Qiao, Meng Zhao, Xiang Li, Di Yin, Jinlong Shu
A Theoretical Understanding of Chain-of-Thought: Coherent Reasoning and Error-Aware Demonstration
Yingqian Cui, Pengfei He, Xianfeng Tang, Qi He, Chen Luo, Jiliang Tang, Yue Xing
Rulebreakers Challenge: Revealing a Blind Spot in Large Language Models' Reasoning with Formal Logic
Jason Chan, Robert Gaizauskas, Zhixue Zhao
"Let's Argue Both Sides": Argument Generation Can Force Small Models to Utilize Previously Inaccessible Reasoning Capabilities
Kaveh Eskandari Miandoab, Vasanth Sarathy
Pyramid-Driven Alignment: Pyramid Principle Guided Integration of Large Language Models and Knowledge Graphs
Lei Sun, Xinchen Wang, Youdi Li
Exploiting LLMs' Reasoning Capability to Infer Implicit Concepts in Legal Information Retrieval
Hai-Long Nguyen, Tan-Minh Nguyen, Duc-Minh Nguyen, Thi-Hai-Yen Vuong, Ha-Thanh Nguyen, Xuan-Hieu Phan