Complex Reasoning Tasks
Complex reasoning tasks challenge large language models (LLMs) to perform multi-step inference, integrating diverse knowledge with logical operations. Current research focuses on improving LLMs' reasoning through techniques such as chain-of-thought prompting, reinforcement learning with refined credit assignment, and the integration of symbolic reasoning methods with neural networks. These advances aim to make LLMs more reliable and generalizable across applications ranging from scientific discovery and medical diagnosis to automated problem-solving and decision-making, ultimately deepening our understanding of artificial intelligence and its societal impact.
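To make the first of these techniques concrete, below is a minimal illustrative sketch of chain-of-thought prompting: the model is shown a worked exemplar whose answer spells out intermediate reasoning steps, which encourages step-by-step reasoning on a new question. The helper names (`build_cot_prompt`, `call_llm`) are hypothetical placeholders, not from any paper listed here; `call_llm` stands in for whatever text-completion API you use.

```python
# Illustrative chain-of-thought (CoT) prompting sketch.
# A worked exemplar with explicit intermediate steps is prepended to the
# new question, nudging the model to reason step by step before answering.

COT_EXEMPLAR = (
    "Q: A cafeteria had 23 apples. They used 20 and bought 6 more. "
    "How many apples do they have?\n"
    "A: They started with 23 apples. After using 20, 23 - 20 = 3 remained. "
    "Buying 6 more gives 3 + 6 = 9. The answer is 9.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a step-by-step exemplar and a reasoning cue to the question."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step."

def call_llm(prompt: str) -> str:
    """Hypothetical model call; replace with a real completion API."""
    raise NotImplementedError("Plug in your LLM client here.")

if __name__ == "__main__":
    # Print the assembled prompt; pass it to call_llm() with a real backend.
    print(build_cot_prompt("If 4 pencils cost $2, how much do 10 pencils cost?"))
```

In practice the exemplar (or several of them) is chosen to match the task domain; the zero-shot variant drops the exemplar and keeps only the "Let's think step by step" cue.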
Papers
Quantifying Generalization Complexity for Large Language Models
Zhenting Qi, Hongyin Luo, Xuliang Huang, Zhuokai Zhao, Yibo Jiang, Xiangjun Fan, Himabindu Lakkaraju, James Glass
VinePPO: Unlocking RL Potential For LLM Reasoning Through Refined Credit Assignment
Amirhossein Kazemnejad, Milad Aghajohari, Eva Portelance, Alessandro Sordoni, Siva Reddy, Aaron Courville, Nicolas Le Roux
Iteration of Thought: Leveraging Inner Dialogue for Autonomous Large Language Model Reasoning
Santosh Kumar Radha, Yasamin Nouri Jelyani, Ara Ghukasyan, Oktay Goktas
CodePlan: Unlocking Reasoning Potential in Large Language Models by Scaling Code-form Planning
Jiaxin Wen, Jian Guan, Hongning Wang, Wei Wu, Minlie Huang
Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data
Jiaming Zhou, Abbas Ghaddar, Ge Zhang, Liheng Ma, Yaochen Hu, Soumyasundar Pal, Mark Coates, Bin Wang, Yingxue Zhang, Jianye Hao