Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). By enabling more robust and accurate decision-making in complex scenarios, these advances improve the reliability and applicability of AI systems across diverse fields, including autonomous driving, robotics, and scientific discovery.
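As a concrete illustration of the first technique above: chain-of-thought prompting prepends a worked, step-by-step exemplar to a query so the model imitates the reasoning pattern rather than answering directly. A minimal sketch (the `build_cot_prompt` helper and the arithmetic exemplar are illustrative assumptions, not taken from any paper listed below):

```python
def build_cot_prompt(question: str) -> str:
    """Prepend a worked, step-by-step exemplar to a question
    (chain-of-thought prompting) so the model imitates the
    reasoning pattern instead of answering in one step."""
    exemplar = (
        "Q: A shop has 23 apples and sells 9. How many remain?\n"
        "A: Start with 23 apples. Selling 9 leaves 23 - 9 = 14. "
        "The answer is 14.\n\n"
    )
    return exemplar + f"Q: {question}\nA: Let's think step by step."

# The resulting string would be sent to an LLM as the prompt.
prompt = build_cot_prompt(
    "If a train travels 60 km in 1.5 hours, what is its speed?"
)
print(prompt)
```

In practice the exemplars are chosen to match the target task's reasoning style, and several of the papers below (e.g., on error-aware demonstrations and reverse chain-of-thought generation) study how that choice affects downstream accuracy.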
Papers
Can Large Language Models Act as Symbolic Reasoners?
Rob Sullivan, Nelly Elsayed
Belief in the Machine: Investigating Epistemological Blind Spots of Language Models
Mirac Suzgun, Tayfun Gur, Federico Bianchi, Daniel E. Ho, Thomas Icard, Dan Jurafsky, James Zou
Causal Interventions on Causal Paths: Mapping GPT-2's Reasoning From Syntax to Semantics
Isabelle Lee, Joshua Lum, Ziyi Liu, Dani Yogatama
Graph Linearization Methods for Reasoning on Graphs with Large Language Models
Christos Xypolopoulos, Guokan Shang, Xiao Fei, Giannis Nikolentzos, Hadi Abdine, Iakovos Evdaimon, Michail Chatzianastasis, Giorgos Stamou, Michalis Vazirgiannis
Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning
Yu Fu, Zefan Cai, Abedelkadir Asi, Wayne Xiong, Yue Dong, Wen Xiao
From Blind Solvers to Logical Thinkers: Benchmarking LLMs' Logical Integrity on Faulty Mathematical Problems
A M Muntasir Rahman, Junyi Ye, Wei Yao, Wenpeng Yin, Guiling Wang
SIKeD: Self-guided Iterative Knowledge Distillation for mathematical reasoning
Shivam Adarsh, Kumar Shridhar, Caglar Gulcehre, Nicholas Monath, Mrinmaya Sachan
Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains
Kun Li, Tianhua Zhang, Xixin Wu, Hongyin Luo, James Glass, Helen Meng
Geometric Feature Enhanced Knowledge Graph Embedding and Spatial Reasoning
Lei Hu, Wenwen Li, Yunqiang Zhu
R-CoT: Reverse Chain-of-Thought Problem Generation for Geometric Reasoning in Large Multimodal Models
Linger Deng, Yuliang Liu, Bohan Li, Dongliang Luo, Liang Wu, Chengquan Zhang, Pengyuan Lyu, Ziyang Zhang, Gang Zhang, Errui Ding, Yingying Zhu, Xiang Bai
CLR-Bench: Evaluating Large Language Models in College-level Reasoning
Junnan Dong, Zijin Hong, Yuanchen Bei, Feiran Huang, Xinrun Wang, Xiao Huang
In Context Learning and Reasoning for Symbolic Regression with Large Language Models
Samiha Sharlin, Tyler R. Josephson
Language Model Non-myopic Generation for Reasoning and Planning
Chang Ma, Haiteng Zhao, Junlei Zhang, Junxian He, Lingpeng Kong
Improving Causal Reasoning in Large Language Models: A Survey
Longxuan Yu, Delin Chen, Siheng Xiong, Qingyang Wu, Qingzhen Liu, Dawei Li, Zhikai Chen, Xiaoze Liu, Liangming Pan
A Theoretical Understanding of Chain-of-Thought: Coherent Reasoning and Error-Aware Demonstration
Yingqian Cui, Pengfei He, Xianfeng Tang, Qi He, Chen Luo, Jiliang Tang, Yue Xing
Rulebreakers Challenge: Revealing a Blind Spot in Large Language Models' Reasoning with Formal Logic
Jason Chan, Robert Gaizauskas, Zhixue Zhao
ToW: Thoughts of Words Improve Reasoning in Large Language Models
Zhikun Xu, Ming Shen, Jacob Dineen, Zhaonan Li, Xiao Ye, Shijie Lu, Aswin RRV, Chitta Baral, Ben Zhou
SMART: Self-learning Meta-strategy Agent for Reasoning Tasks
Rongxing Liu, Kumar Shridhar, Manish Prajapat, Patrick Xia, Mrinmaya Sachan