Complex Reasoning
Complex reasoning in artificial intelligence focuses on building models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research centers on improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often over multi-modal inputs (e.g., visual and textual information). These advances matter because they make AI systems more reliable and broadly applicable, enabling more robust and accurate decision-making in complex scenarios across fields such as autonomous driving, robotics, and scientific discovery.
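Chain-of-thought prompting, one of the techniques mentioned above, can be illustrated with a minimal sketch. The idea is to prompt the model with a worked example and an explicit cue to reason step by step, rather than asking for a bare answer. The function name, the few-shot example, and the exact cue phrasing below are illustrative assumptions, not taken from any specific paper in this list; the prompt would then be sent to an LLM of your choice.

```python
# Minimal sketch of chain-of-thought (CoT) prompting.
# The one-shot worked example and the "step by step" cue are
# illustrative assumptions; in practice these are tuned per task/model.

def build_cot_prompt(question: str) -> str:
    """Wrap a question with a one-shot worked example and a CoT cue."""
    example = (
        "Q: Roger has 5 balls and buys 2 cans of 3 balls each. "
        "How many balls does he have?\n"
        "A: He starts with 5 balls. 2 cans of 3 balls is 6 balls. "
        "5 + 6 = 11. The answer is 11.\n\n"
    )
    # The trailing cue nudges the model to emit intermediate reasoning
    # before its final answer.
    return example + f"Q: {question}\nA: Let's think step by step."

prompt = build_cot_prompt(
    "A train travels 60 km in 1.5 hours. What is its average speed?"
)
print(prompt)
```

The returned string is a complete prompt: a demonstration of explicit intermediate steps followed by the new question, which is what distinguishes CoT prompting from direct question answering.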
Papers
When a language model is optimized for reasoning, does it still show embers of autoregression? An analysis of OpenAI o1
R. Thomas McCoy, Shunyu Yao, Dan Friedman, Mathew D. Hardy, Thomas L. Griffiths
Why context matters in VQA and Reasoning: Semantic interventions for VLM input modalities
Kenza Amara, Lukas Klein, Carsten Lüth, Paul Jäger, Hendrik Strobelt, Mennatallah El-Assady
Can We Further Elicit Reasoning in LLMs? Critic-Guided Planning with Retrieval-Augmentation for Solving Challenging Tasks
Xingxuan Li, Weiwen Xu, Ruochen Zhao, Fangkai Jiao, Shafiq Joty, Lidong Bing
RATIONALYST: Pre-training Process-Supervision for Improving Reasoning
Dongwei Jiang, Guoxuan Wang, Yining Lu, Andrew Wang, Jingyu Zhang, Chuyu Liu, Benjamin Van Durme, Daniel Khashabi
AHA: A Vision-Language-Model for Detecting and Reasoning Over Failures in Robotic Manipulation
Jiafei Duan, Wilbert Pumacay, Nishanth Kumar, Yi Ru Wang, Shulin Tian, Wentao Yuan, Ranjay Krishna, Dieter Fox, Ajay Mandlekar, Yijie Guo
DualAD: Dual-Layer Planning for Reasoning in Autonomous Driving
Dingrui Wang, Marc Kaufeld, Johannes Betz
Evaluating Multilingual Long-Context Models for Retrieval and Reasoning
Ameeta Agrawal, Andy Dang, Sina Bagheri Nezhad, Rhitabrat Pokharel, Russell Scheinberg
Logic-of-Thought: Injecting Logic into Contexts for Full Reasoning in Large Language Models
Tongxuan Liu, Wenjiang Xu, Weizhe Huang, Xingyu Wang, Jiaxing Wang, Hailong Yang, Jing Li
HDFlow: Enhancing LLM Complex Problem-Solving with Hybrid Thinking and Dynamic Workflows
Wenlin Yao, Haitao Mi, Dong Yu
Enhancing Temporal Sensitivity and Reasoning for Time-Sensitive Question Answering
Wanqi Yang, Yanda Li, Meng Fang, Ling Chen
Judgment of Thoughts: Courtroom of the Binary Logical Reasoning in Large Language Models
Sungjune Park, Daeseon Choi
Uncovering Latent Chain of Thought Vectors in Language Models
Jason Zhang, Scott Viteri
StateAct: State Tracking and Reasoning for Acting and Planning with Large Language Models
Nikolai Rozanov, Marek Rei
BrainDreamer: Reasoning-Coherent and Controllable Image Generation from EEG Brain Signals via Language Guidance
Ling Wang, Chen Wu, Lin Wang