Complex Reasoning Tasks
Complex reasoning tasks challenge large language models (LLMs) to perform multi-step inference, integrating diverse knowledge with logical operations. Current research focuses on improving LLMs' reasoning abilities through techniques such as chain-of-thought prompting, reinforcement learning with refined credit assignment, and hybrid approaches that combine symbolic reasoning with neural networks. These advances aim to make LLMs more reliable and generalizable in applications ranging from scientific discovery and medical diagnosis to automated problem-solving and decision-making.
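Of the techniques above, chain-of-thought prompting is the simplest to illustrate: the model is given a worked exemplar whose answer is reached through explicit intermediate steps, encouraging it to produce similar step-by-step reasoning for a new question. The sketch below builds such a prompt as a plain string; the exemplar and the `build_cot_prompt` helper are illustrative assumptions, not part of any specific paper listed here.

```python
# Minimal sketch of chain-of-thought (CoT) prompting.
# The worked exemplar is illustrative; in practice several exemplars
# from the target domain are typically concatenated.

COT_EXEMPLAR = (
    "Q: Roger has 5 tennis balls. He buys 2 cans of 3 balls each. "
    "How many balls does he have now?\n"
    "A: Roger starts with 5 balls. 2 cans of 3 balls is 6 balls. "
    "5 + 6 = 11. The answer is 11.\n"
)

def build_cot_prompt(question: str) -> str:
    """Prepend a step-by-step exemplar so the model imitates its format."""
    return f"{COT_EXEMPLAR}\nQ: {question}\nA:"

prompt = build_cot_prompt(
    "A library has 23 books and lends out 7. How many remain?"
)
print(prompt)
```

The resulting prompt ends with a bare `A:`, leaving the model to continue with its own intermediate steps before stating a final answer; zero-shot variants instead append a cue such as "Let's think step by step."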
Papers
Iteration of Thought: Leveraging Inner Dialogue for Autonomous Large Language Model Reasoning
Santosh Kumar Radha, Yasamin Nouri Jelyani, Ara Ghukasyan, Oktay Goktas
CodePlan: Unlocking Reasoning Potential in Large Language Models by Scaling Code-form Planning
Jiaxin Wen, Jian Guan, Hongning Wang, Wei Wu, Minlie Huang
Enhancing Logical Reasoning in Large Language Models through Graph-based Synthetic Data
Jiaming Zhou, Abbas Ghaddar, Ge Zhang, Liheng Ma, Yaochen Hu, Soumyasundar Pal, Mark Coates, Bin Wang, Yingxue Zhang, Jianye Hao
CoverBench: A Challenging Benchmark for Complex Claim Verification
Alon Jacovi, Moran Ambar, Eyal Ben-David, Uri Shaham, Amir Feder, Mor Geva, Dror Marcus, Avi Caciularu
Unveiling Factual Recall Behaviors of Large Language Models through Knowledge Neurons
Yifei Wang, Yuheng Chen, Wanting Wen, Yu Sheng, Linjing Li, Daniel Dajun Zeng