Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often over multi-modal inputs (e.g., visual and textual information). These advances matter because they improve the reliability and applicability of AI systems across diverse fields, including autonomous driving, robotics, and scientific discovery, enabling more robust and accurate decision-making in complex scenarios.
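To make the first of these techniques concrete, the sketch below shows minimal chain-of-thought prompting in Python. It is an illustration only, not taken from any of the papers listed here: the call_llm argument is a hypothetical stand-in for whatever text-generation backend you use (an API client or a local model), and the "Final answer:" marker is an assumed convention. The part the technique actually prescribes is the prompt construction, which asks the model to produce intermediate reasoning steps before committing to an answer.

from typing import Callable

COT_TEMPLATE = (
    "Answer the following question. Think step by step and show your "
    "reasoning before giving the final answer, prefixed with 'Final answer:'.\n\n"
    "Question: {question}\n"
    "Reasoning:"
)

def chain_of_thought_prompt(question: str) -> str:
    """Wrap a question in a chain-of-thought instruction."""
    return COT_TEMPLATE.format(question=question)

def answer_with_cot(question: str, call_llm: Callable[[str], str]) -> str:
    """Run one chain-of-thought round with any text-generation backend.

    call_llm is a hypothetical stand-in: pass in whatever function sends a
    prompt to your model and returns its completion as a string.
    """
    completion = call_llm(chain_of_thought_prompt(question))
    # By the assumed convention above, the final answer follows the marker;
    # fall back to the full completion if the model did not emit it.
    marker = "Final answer:"
    return completion.split(marker, 1)[-1].strip()

if __name__ == "__main__":
    # Dummy backend, just to show the call shape.
    dummy = lambda prompt: "2 + 2 equals 4. Final answer: 4"
    print(answer_with_cot("What is 2 + 2?", dummy))

The same wrapper shape extends naturally to RAG: retrieved passages would be prepended to the prompt before the question, so the model reasons over supplied evidence rather than parametric memory alone.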
Papers
SBI-RAG: Enhancing Math Word Problem Solving for Students through Schema-Based Instruction and Retrieval-Augmented Generation
Prakhar Dixit, Tim Oates
Proof Flow: Preliminary Study on Generative Flow Network Language Model Tuning for Formal Reasoning
Matthew Ho, Vincent Zhu, Xiaoyin Chen, Moksh Jain, Nikolay Malkin, Edwin Zhang
When Not to Answer: Evaluating Prompts on GPT Models for Effective Abstention in Unanswerable Math Word Problems
Asir Saadat, Tasmia Binte Sogir, Md Taukir Azam Chowdhury, Syem Aziz
Learning Representations for Reasoning: Generalizing Across Diverse Structures
Zhaocheng Zhu
PRefLexOR: Preference-based Recursive Language Modeling for Exploratory Optimization of Reasoning and Agentic Thinking
Markus J. Buehler
A Prompt-Based Knowledge Graph Foundation Model for Universal In-Context Reasoning
Yuanning Cui, Zequn Sun, Wei Hu
OmnixR: Evaluating Omni-modality Language Models on Reasoning across Modalities
Lichang Chen, Hexiang Hu, Mingda Zhang, Yiwen Chen, Zifeng Wang, Yandong Li, Pranav Shyam, Tianyi Zhou, Heng Huang, Ming-Hsuan Yang, Boqing Gong
Navigation under uncertainty: Trajectory prediction and occlusion reasoning with switching dynamical systems
Ran Wei, Joseph Lee, Shohei Wakayama, Alexander Tschantz, Conor Heins, Christopher Buckley, John Carenbauer, Hari Thiruvengada, Mahault Albarracin, Miguel de Prado, Petter Horling, Peter Winzell, Renjith Rajagopal
Thinking LLMs: General Instruction Following with Thought Generation
Tianhao Wu, Janice Lan, Weizhe Yuan, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios
Timo Pierre Schrader, Lukas Lange, Simon Razniewski, Annemarie Friedrich
CoMAT: Chain of Mathematically Annotated Thought Improves Mathematical Reasoning
Joshua Ong Jun Leang, Aryo Pradipta Gema, Shay B. Cohen
OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models
Jun Wang, Meng Fang, Ziyu Wan, Muning Wen, Jiachen Zhu, Anjie Liu, Ziqin Gong, Yan Song, Lei Chen, Lionel M. Ni, Linyi Yang, Ying Wen, Weinan Zhang
Transformer-based Language Models for Reasoning in the Description Logic ALCQ
Angelos Poulis, Eleni Tsalapati, Manolis Koubarakis
CAMPHOR: Collaborative Agents for Multi-input Planning and High-Order Reasoning On Device
Yicheng Fu, Raviteja Anantha, Jianpeng Cheng