Complex Reasoning Tasks
Complex reasoning tasks challenge large language models (LLMs) to perform multi-step inference and to solve problems that require integrating diverse knowledge with logical operations. Current research aims to improve LLMs' reasoning through techniques such as chain-of-thought prompting, reinforcement learning with refined credit assignment, and the combination of symbolic reasoning methods with neural networks. These advances seek to make LLM reasoning more reliable and generalizable for applications ranging from scientific discovery and medical diagnosis to automated problem solving and decision-making.
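To make chain-of-thought prompting concrete, the sketch below shows the general pattern of sampling several reasoning paths and majority-voting over their final answers (a self-consistency-style technique, related in spirit to the self-agreement paper below but not a reproduction of its method). It is a minimal, hypothetical example: `sample_completion` stands in for any LLM sampling call and is mocked with canned outputs so the script runs end to end, and `extract_answer` assumes answers are marked with the phrase "The answer is". None of these names come from the papers listed here.

```python
import random
import re
from collections import Counter

# One worked example in the prompt demonstrates step-by-step reasoning
# (the core of chain-of-thought prompting).
COT_PROMPT = """Q: A shop sells pens at $2 each. How much do 3 pens cost?
A: Each pen costs $2, so 3 pens cost 3 * 2 = $6. The answer is 6.

Q: {question}
A:"""


def sample_completion(prompt: str, temperature: float = 0.8) -> str:
    """Placeholder for an LLM sampling call at nonzero temperature.

    Mocked with canned reasoning paths (including one faulty path) so
    this sketch is self-contained and runnable.
    """
    paths = [
        "There are 4 boxes with 5 apples each, so 4 * 5 = 20. The answer is 20.",
        "4 boxes times 5 apples per box gives 20 apples. The answer is 20.",
        "5 + 5 + 5 + 5 = 20 apples in total. The answer is 20.",
        "Adding 4 and 5 gives 9. The answer is 9.",  # a flawed reasoning path
    ]
    return random.choice(paths)


def extract_answer(completion: str) -> str | None:
    """Pull the final numeric answer out of a reasoning path."""
    match = re.search(r"The answer is\s+(-?\d+)", completion)
    return match.group(1) if match else None


def self_consistent_answer(question: str, n_samples: int = 9) -> str:
    """Sample several chain-of-thought paths and majority-vote on the answers."""
    prompt = COT_PROMPT.format(question=question)
    answers = [extract_answer(sample_completion(prompt)) for _ in range(n_samples)]
    votes = Counter(a for a in answers if a is not None)
    answer, _ = votes.most_common(1)[0]
    return answer


if __name__ == "__main__":
    question = "There are 4 boxes with 5 apples each. How many apples are there?"
    print(self_consistent_answer(question))  # majority vote suppresses the faulty path
```

The design point is that sampling diverse reasoning paths and aggregating their answers trades extra inference cost for robustness: a single flawed chain of thought is outvoted by the consistent majority.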
Papers
Just Ask One More Time! Self-Agreement Improves Reasoning of Language Models in (Almost) All Scenarios
Lei Lin, Jiayi Fu, Pengli Liu, Qingyang Li, Yan Gong, Junchen Wan, Fuzheng Zhang, Zhongyuan Wang, Di Zhang, Kun Gai
Towards Reasoning in Large Language Models via Multi-Agent Peer Review Collaboration
Zhenran Xu, Senbao Shi, Baotian Hu, Jindi Yu, Dongfang Li, Min Zhang, Yuxiang Wu
Well begun is half done: Importance of Starting Right in Multi-Step Math Reasoning
Kushal Jain, Niket Tandon, Kumar Shridhar
Encouraging Divergent Thinking in Large Language Models through Multi-Agent Debate
Tian Liang, Zhiwei He, Wenxiang Jiao, Xing Wang, Yan Wang, Rui Wang, Yujiu Yang, Shuming Shi, Zhaopeng Tu
Dissecting Chain-of-Thought: Compositionality through In-Context Filtering and Learning
Yingcong Li, Kartik Sreenivasan, Angeliki Giannou, Dimitris Papailiopoulos, Samet Oymak