Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). These advances enhance the reliability and applicability of AI systems across diverse fields, including autonomous driving, robotics, and scientific discovery, by enabling more robust and accurate decision-making in complex scenarios.
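Of the techniques named above, chain-of-thought prompting is the simplest to illustrate: the model is shown a worked example of step-by-step reasoning and then cued to reason before answering the target question. The sketch below is illustrative only; the call_llm helper is a hypothetical placeholder for whatever text-completion API is in use, not a specific provider's interface.

# Minimal sketch of chain-of-thought prompting: a one-shot exemplar of
# step-by-step reasoning followed by a "think step by step" cue.
# `call_llm` is a hypothetical stand-in for any text-completion API.

def call_llm(prompt: str) -> str:
    """Hypothetical LLM call; replace with your provider's completion API."""
    raise NotImplementedError

def chain_of_thought(question: str) -> str:
    # Build the prompt: one worked exemplar, then the new question with a
    # cue that encourages the model to produce intermediate reasoning steps.
    prompt = (
        "Q: A pen costs $2 and a notebook costs $3. How much do 2 pens and "
        "1 notebook cost?\n"
        "A: Let's think step by step. 2 pens cost 2 * $2 = $4. "
        "1 notebook costs $3. Total = $4 + $3 = $7. The answer is 7.\n\n"
        f"Q: {question}\n"
        "A: Let's think step by step."
    )
    return call_llm(prompt)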
Papers
Thinking LLMs: General Instruction Following with Thought Generation
Tianhao Wu, Janice Lan, Weizhe Yuan, Jiantao Jiao, Jason Weston, Sainbayar Sukhbaatar
QUITE: Quantifying Uncertainty in Natural Language Text in Bayesian Reasoning Scenarios
Timo Pierre Schrader, Lukas Lange, Simon Razniewski, Annemarie Friedrich
CoMAT: Chain of Mathematically Annotated Thought Improves Mathematical Reasoning
Joshua Ong Jun Leang, Aryo Pradipta Gema, Shay B. Cohen
OpenR: An Open Source Framework for Advanced Reasoning with Large Language Models
Jun Wang, Meng Fang, Ziyu Wan, Muning Wen, Jiachen Zhu, Anjie Liu, Ziqin Gong, Yan Song, Lei Chen, Lionel M. Ni, Linyi Yang, Ying Wen, Weinan Zhang
Transformer-based Language Models for Reasoning in the Description Logic ALCQ
Angelos Poulis, Eleni Tsalapati, Manolis Koubarakis
CAMPHOR: Collaborative Agents for Multi-input Planning and High-Order Reasoning On Device
Yicheng Fu, Raviteja Anantha, Jianpeng Cheng
Diversity of Thought Elicits Stronger Reasoning Capabilities in Multi-Agent Debate Frameworks
Mahmood Hegazy
Agents Thinking Fast and Slow: A Talker-Reasoner Architecture
Konstantina Christakopoulou, Shibl Mourad, Maja Matarić
Teaching-Inspired Integrated Prompting Framework: A Novel Approach for Enhancing Reasoning in Large Language Models
Wenting Tan, Dongxiao Chen, Jieting Xue, Zihao Wang, Taijie Chen
Neural Networks Decoded: Targeted and Robust Analysis of Neural Network Decisions via Causal Explanations and Reasoning
Alec F. Diallo, Vaishak Belle, Paul Patras
Proceedings of the First International Workshop on Next-Generation Language Models for Knowledge Representation and Reasoning (NeLaMKRR 2024)
Ken Satoh, Ha-Thanh Nguyen, Francesca Toni, Randy Goebel, Kostas Stathis