Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). These advances matter because they enable more robust and accurate decision-making in complex scenarios, improving the reliability and applicability of AI systems across fields such as autonomous driving, robotics, and scientific discovery.
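Of the techniques named above, chain-of-thought prompting is the easiest to illustrate concretely: rather than asking for an answer directly, the prompt includes a worked exemplar whose intermediate reasoning steps the model is encouraged to imitate. The sketch below is a minimal, hypothetical illustration, not any paper's method: `query_llm` is a mock stand-in for whatever completion API is available (not a real library call), and the "The answer is X." convention for extracting the final answer is an assumption borrowed from common CoT practice.

```python
# Minimal chain-of-thought (CoT) prompting sketch. `query_llm` is a mocked,
# hypothetical stand-in for a real completion API; it returns a canned trace
# so the example runs end to end without external dependencies.
import re

# One few-shot exemplar with worked intermediate steps: showing the model a
# reasoning trace, not just a final answer, is the core of CoT prompting.
COT_EXEMPLAR = (
    "Q: A train covers 60 km in hour one and 40 km in hour two. "
    "What is its average speed?\n"
    "A: Let's think step by step.\n"
    "Total distance = 60 + 40 = 100 km. Total time = 2 hours.\n"
    "Average speed = 100 / 2 = 50 km/h. The answer is 50.\n"
)

def query_llm(prompt: str) -> str:
    """Mock LLM call; replace with your provider's completion endpoint."""
    return "Distance = speed * time = 20 * 3 = 60 km. The answer is 60."

def chain_of_thought(question: str) -> tuple[str, str | None]:
    """Prompt with the CoT exemplar; return the reasoning trace and the
    extracted final answer (or None if no answer line is found)."""
    prompt = f"{COT_EXEMPLAR}\nQ: {question}\nA: Let's think step by step.\n"
    trace = query_llm(prompt)
    # By the exemplar's convention the trace ends with "The answer is X.";
    # parse X out of that line.
    match = re.search(r"The answer is\s+([^.\n]+)", trace)
    return trace, match.group(1).strip() if match else None

if __name__ == "__main__":
    trace, answer = chain_of_thought(
        "A cyclist rides at 20 km/h for 3 hours. How far does she travel?")
    print("Reasoning trace:", trace)
    print("Extracted answer:", answer)  # -> 60
```

In practice the mock would be replaced by a real model call; a common extension is self-consistency, which samples several traces and takes a majority vote over the extracted answers.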
Papers
Mitigating Tail Narrowing in LLM Self-Improvement via Socratic-Guided Sampling
Yiwen Ding, Zhiheng Xi, Wei He, Zhuoyuan Li, Yitao Zhai, Xiaowei Shi, Xunliang Cai, Tao Gui, Qi Zhang, Xuanjing Huang
Towards Multi-Source Retrieval-Augmented Generation via Synergizing Reasoning and Preference-Driven Retrieval
Qingfei Zhao, Ruobing Wang, Xin Wang, Daren Zha, Nan Mu
A little less conversation, a little more action, please: Investigating the physical common-sense of LLMs in a 3D embodied environment
Matteo G. Mecattaf, Ben Slater, Marko Tešić, Jonathan Prunty, Konstantinos Voudouris, Lucy G. Cheke
Vision-Language Models Can Self-Improve Reasoning via Reflection
Kanzhi Cheng, Yantao Li, Fangzhi Xu, Jianbing Zhang, Hao Zhou, Yang Liu
Eliciting Critical Reasoning in Retrieval-Augmented Language Models via Contrastive Explanations
Leonardo Ranaldi, Marco Valentino, André Freitas
RealCQA-V2: Visual Premise Proving
Saleem Ahmed, Rangaraj Setlur, Venu Govindaraju
Flow-DPO: Improving LLM Mathematical Reasoning through Online Multi-Agent Learning
Yihe Deng, Paul Mineiro
Diffusion as Reasoning: Enhancing Object Goal Navigation with LLM-Biased Diffusion Model
Yiming Ji, Yang Liu, Zhengpu Wang, Boyu Ma, Zongwu Xie, Hong Liu
Can Large Language Models Act as Symbolic Reasoners?
Rob Sullivan, Nelly Elsayed
Belief in the Machine: Investigating Epistemological Blind Spots of Language Models
Mirac Suzgun, Tayfun Gur, Federico Bianchi, Daniel E. Ho, Thomas Icard, Dan Jurafsky, James Zou
Causal Interventions on Causal Paths: Mapping GPT-2's Reasoning From Syntax to Semantics
Isabelle Lee, Joshua Lum, Ziyi Liu, Dani Yogatama
Graph Linearization Methods for Reasoning on Graphs with Large Language Models
Christos Xypolopoulos, Guokan Shang, Xiao Fei, Giannis Nikolentzos, Hadi Abdine, Iakovos Evdaimon, Michail Chatzianastasis, Giorgos Stamou, Michalis Vazirgiannis
Not All Heads Matter: A Head-Level KV Cache Compression Method with Integrated Retrieval and Reasoning
Yu Fu, Zefan Cai, Abedelkadir Asi, Wayne Xiong, Yue Dong, Wen Xiao
From Blind Solvers to Logical Thinkers: Benchmarking LLMs' Logical Integrity on Faulty Mathematical Problems
A M Muntasir Rahman, Junyi Ye, Wei Yao, Wenpeng Yin, Guiling Wang
SIKeD: Self-guided Iterative Knowledge Distillation for mathematical reasoning
Shivam Adarsh, Kumar Shridhar, Caglar Gulcehre, Nicholas Monath, Mrinmaya Sachan
Decoding on Graphs: Faithful and Sound Reasoning on Knowledge Graphs through Generation of Well-Formed Chains
Kun Li, Tianhua Zhang, Xixin Wu, Hongyin Luo, James Glass, Helen Meng
Geometric Feature Enhanced Knowledge Graph Embedding and Spatial Reasoning
Lei Hu, Wenwen Li, Yunqiang Zhu