Reasoning Path
Reasoning paths in large language models (LLMs) are the sequences of intermediate steps a model generates while solving a problem, loosely mirroring human step-by-step thought. Current research focuses on improving the quality and efficiency of these paths through techniques such as multi-agent systems, tree-based search algorithms (e.g., Tree of Thoughts), and methods that dynamically adjust the depth of reasoning based on task complexity and model confidence. This work matters because better reasoning paths yield more accurate, reliable, and efficient LLM performance across applications ranging from question answering and knowledge graph reasoning to complex problem solving in robotics and other domains.
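To make the tree-based search idea concrete, here is a minimal Python sketch of a Tree-of-Thoughts-style beam search over partial reasoning paths. It is a sketch under stated assumptions, not any paper's implementation: `propose_steps` and `score_state` are hypothetical stand-ins for LLM "propose" and "evaluate" calls, stubbed with a toy arithmetic task so the example runs end to end.

```python
# Tree-of-Thoughts-style search sketch: expand candidate next steps,
# score partial paths, and keep only the best `beam` paths per level.
# propose_steps / score_state are hypothetical stand-ins for LLM calls.

from dataclasses import dataclass

@dataclass
class State:
    steps: tuple[str, ...]   # the partial reasoning path so far
    value: int               # toy running total for the stub task

def propose_steps(state: State) -> list[State]:
    """Stub for an LLM 'propose' call: branch into candidate next steps."""
    return [
        State(state.steps + (f"add {k}",), state.value + k)
        for k in (1, 2, 3)
    ]

def score_state(state: State, target: int) -> float:
    """Stub for an LLM 'evaluate' call: higher means closer to the goal."""
    return -abs(target - state.value)

def tree_of_thoughts(target: int, depth: int = 4, beam: int = 2) -> State:
    """Breadth-first search that prunes to the `beam` best paths per level."""
    frontier = [State((), 0)]
    for _ in range(depth):
        candidates = [nxt for s in frontier for nxt in propose_steps(s)]
        candidates.sort(key=lambda s: score_state(s, target), reverse=True)
        frontier = candidates[:beam]        # prune weak reasoning paths
        if frontier[0].value == target:     # goal check: stop early
            break
    return frontier[0]

if __name__ == "__main__":
    best = tree_of_thoughts(target=7)
    print("path:", " -> ".join(best.steps), "| value:", best.value)
```

Real systems replace the stubs with prompted model calls, and adaptive methods of the kind surveyed above typically vary `depth` and `beam` with task difficulty or model confidence rather than fixing them in advance.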
Papers
Plan-on-Graph: Self-Correcting Adaptive Planning of Large Language Model on Knowledge Graphs
Liyi Chen, Panrong Tong, Zhongming Jin, Ying Sun, Jieping Ye, Hui Xiong
Can Language Models Perform Robust Reasoning in Chain-of-thought Prompting with Noisy Rationales?
Zhanke Zhou, Rong Tao, Jianing Zhu, Yiwen Luo, Zengmao Wang, Bo Han
Graph-constrained Reasoning: Faithful Reasoning on Knowledge Graphs with Large Language Models
Linhao Luo, Zicheng Zhao, Chen Gong, Gholamreza Haffari, Shirui Pan
Not All Votes Count! Programs as Verifiers Improve Self-Consistency of Language Models for Math Reasoning
Vernon Y.H. Toh, Deepanway Ghosal, Soujanya Poria
The Role of Deductive and Inductive Reasoning in Large Language Models
Chengkun Cai, Xu Zhao, Haoliang Liu, Zhongyu Jiang, Tianfang Zhang, Zongkai Wu, Jenq-Neng Hwang, Lei Li
ReGenesis: LLMs can Grow into Reasoning Generalists via Self-Improvement
Xiangyu Peng, Congying Xia, Xinyi Yang, Caiming Xiong, Chien-Sheng Wu, Chen Xing