Complex Reasoning Tasks
Complex reasoning tasks challenge large language models (LLMs) to perform multi-step inference, integrating diverse knowledge with logical operations to solve a problem. Current research focuses on improving LLMs' reasoning through techniques such as chain-of-thought prompting, reinforcement learning with refined credit assignment, and hybrids of symbolic reasoning methods with neural networks. These advances aim to make LLM reasoning more reliable and generalizable for applications ranging from scientific discovery and medical diagnosis to automated problem-solving and decision-making.
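To make the chain-of-thought idea concrete, below is a minimal Python sketch of few-shot CoT prompting: a worked exemplar plus an explicit "think step by step" cue are prepended to the question so the model emits intermediate reasoning before its final answer. The `complete` function is a hypothetical stand-in for any LLM text-completion call, not a specific API.

```python
# Minimal chain-of-thought (CoT) prompting sketch.
# Assumption: the caller supplies `complete(prompt: str) -> str`,
# a stand-in for any LLM completion API (hypothetical).

# One worked exemplar showing intermediate reasoning before the answer.
FEW_SHOT_COT = """\
Q: Roger has 5 tennis balls. He buys 2 more cans of 3 balls each. \
How many balls does he have now?
A: Let's think step by step. Roger starts with 5 balls. 2 cans of \
3 balls is 6 balls. 5 + 6 = 11. The answer is 11.
"""


def cot_prompt(question: str) -> str:
    """Prepend the exemplar and a step-by-step cue to elicit
    intermediate reasoning rather than a bare final answer."""
    return f"{FEW_SHOT_COT}\nQ: {question}\nA: Let's think step by step."


def answer(question: str, complete) -> str:
    # `complete` is any prompt -> text LLM call (assumption).
    return complete(cot_prompt(question))


if __name__ == "__main__":
    # Inspect the constructed prompt without calling a model.
    print(cot_prompt("A cafeteria had 23 apples. It used 20 and "
                     "bought 6 more. How many apples does it have?"))
```

The design choice is that all model-specific details stay behind `complete`, so the same prompt construction works with any completion backend.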
Papers
Path-of-Thoughts: Extracting and Following Paths for Robust Relational Reasoning with Large Language Models
Ge Zhang, Mohammad Ali Alomrani, Hongjian Gu, Jiaming Zhou, Yaochen Hu, Bin Wang, Qun Liu, Mark Coates, Yingxue Zhang, Jianye Hao
Deliberation in Latent Space via Differentiable Cache Augmentation
Luyang Liu, Jonas Pfeiffer, Jiaxing Wu, Jun Xie, Arthur Szlam
B-STaR: Monitoring and Balancing Exploration and Exploitation in Self-Taught Reasoners
Weihao Zeng, Yuzhen Huang, Lulu Zhao, Yijun Wang, Zifei Shan, Junxian He
Are Your LLMs Capable of Stable Reasoning?
Junnan Liu, Hongwei Liu, Linchen Xiao, Ziyi Wang, Kuikun Liu, Songyang Gao, Wenwei Zhang, Songyang Zhang, Kai Chen
RAG-Star: Enhancing Deliberative Reasoning with Retrieval Augmented Verification and Refinement
Jinhao Jiang, Jiayi Chen, Junyi Li, Ruiyang Ren, Shijie Wang, Wayne Xin Zhao, Yang Song, Tao Zhang