LLM Reasoning
Research on Large Language Model (LLM) reasoning focuses on improving the ability of LLMs to carry out complex, multi-step reasoning, typically by augmenting them with techniques such as chain-of-thought prompting, reinforcement learning (RL), and integration with symbolic reasoning methods. Current efforts concentrate on making LLM reasoning more accurate and reliable, addressing issues such as hallucination and inconsistent performance across domains and tasks, for example through improved credit assignment in RL and the development of new evaluation metrics. These advances matter because reliable LLM reasoning is crucial for building trustworthy AI systems across diverse applications, from robotics and healthcare to scientific discovery and decision support.
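To make the chain-of-thought prompting technique mentioned above concrete, the sketch below shows the basic pattern: prepend a worked example with explicit intermediate steps, then ask the model to reason step by step before giving a final answer. This is a minimal illustration only; the `generate` function is a hypothetical placeholder standing in for whatever LLM API or local model one actually uses, and the stub simply returns a canned completion so the script runs end to end.

```python
# Minimal sketch of chain-of-thought prompting.
# `generate(prompt) -> str` is a hypothetical placeholder for a real LLM call;
# the stub below returns a canned completion so the example is runnable.

def generate(prompt: str) -> str:
    # Placeholder for a real LLM call (hosted API or local model).
    return ("Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
            "5 + 6 = 11. The answer is 11.")

FEW_SHOT_EXAMPLE = (
    "Q: Roger has 5 tennis balls. He buys 2 more cans of tennis balls. "
    "Each can has 3 tennis balls. How many tennis balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls each is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n\n"
)

def chain_of_thought(question: str) -> str:
    # Prepend a worked example so the model imitates step-by-step reasoning,
    # then pose the new question and explicitly invite intermediate steps.
    prompt = FEW_SHOT_EXAMPLE + f"Q: {question}\nA: Let's think step by step."
    completion = generate(prompt)
    # Read the final answer off the trailing "The answer is ..." span.
    return completion.rsplit("The answer is", 1)[-1].strip(" .")

if __name__ == "__main__":
    print(chain_of_thought(
        "A cafeteria had 23 apples. They used 20 and bought 6 more. "
        "How many apples do they have?"))
```

The design choice illustrated here is that the reasoning trace is elicited purely through prompt structure rather than model changes; RL-based approaches such as process-reward-guided search build on the same traces by scoring the intermediate steps rather than only the final answer.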
Papers
The CLRS-Text Algorithmic Reasoning Language Benchmark
Larisa Markeeva, Sean McLeish, Borja Ibarz, Wilfried Bounsi, Olga Kozlova, Alex Vitvitskyi, Charles Blundell, Tom Goldstein, Avi Schwarzschild, Petar Veličković
ReST-MCTS*: LLM Self-Training via Process Reward Guided Tree Search
Dan Zhang, Sining Zhoubian, Ziniu Hu, Yisong Yue, Yuxiao Dong, Jie Tang
Less but Better: Enabling Generalized Zero-shot Learning Towards Unseen Domains by Intrinsic Learning from Redundant LLM Semantics
Jiaqi Yue, Jiancheng Zhao, Chunhui Zhao
ERD: A Framework for Improving LLM Reasoning for Cognitive Distortion Classification
Sehee Lim, Yejin Kim, Chi-Hyun Choi, Jy-yong Sohn, Byung-Hoon Kim