Complex Reasoning
Research on complex reasoning in artificial intelligence aims to build models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current work emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often over multi-modal inputs (e.g., visual and textual information). These advances matter because they make AI systems more reliable and broadly applicable, enabling more robust and accurate decision-making in complex scenarios across fields such as autonomous driving, robotics, and scientific discovery.
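Since prompting-based approaches recur throughout the papers below, a minimal sketch of few-shot chain-of-thought prompting may help fix ideas. This is an illustrative sketch under stated assumptions, not any listed paper's method: `call_llm` is a hypothetical stand-in for a real text-generation client, and the exemplar and question are toy examples.

```python
"""Minimal sketch of few-shot chain-of-thought (CoT) prompting."""


def call_llm(prompt: str) -> str:
    # Hypothetical placeholder so the sketch runs end to end;
    # swap in a call to a real model client here.
    return "Start with 3 apples. Buying 4 more gives 3 + 4 = 7. The answer is 7."


# A worked example with explicit intermediate steps; showing the model
# such a trace is the core of few-shot chain-of-thought prompting.
FEW_SHOT_EXEMPLAR = (
    "Q: A farmer has 15 sheep and buys 8 more. Then 5 run away. "
    "How many are left?\n"
    "A: Start with 15 sheep. Buying 8 more gives 15 + 8 = 23. "
    "After 5 run away, 23 - 5 = 18. The answer is 18.\n\n"
)


def answer_with_cot(question: str) -> str:
    # The "Let's think step by step" cue elicits an explicit reasoning
    # trace before the final answer, rather than a one-shot guess.
    prompt = FEW_SHOT_EXEMPLAR + f"Q: {question}\nA: Let's think step by step."
    return call_llm(prompt)


if __name__ == "__main__":
    print(answer_with_cot("I have 3 apples and buy 4 more. How many do I have?"))
```

The same scaffold generalizes to the abstraction and analogy strategies studied below, which mainly change what goes into the prompt (e.g., a step-back question or a retrieved analogous problem) rather than how the model is called.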
Papers
Take a Step Back: Evoking Reasoning via Abstraction in Large Language Models
Huaixiu Steven Zheng, Swaroop Mishra, Xinyun Chen, Heng-Tze Cheng, Ed H. Chi, Quoc V Le, Denny Zhou
Put Your Money Where Your Mouth Is: Evaluating Strategic Planning and Execution of LLM Agents in an Auction Arena
Jiangjie Chen, Siyu Yuan, Rong Ye, Bodhisattwa Prasad Majumder, Kyle Richardson
Parrot Mind: Towards Explaining the Complex Task Reasoning of Pretrained Large Language Models with Template-Content Structure
Haotong Yang, Fanxu Meng, Zhouchen Lin, Muhan Zhang
Ada-Instruct: Adapting Instruction Generators for Complex Reasoning
Wanyun Cui, Qianle Wang
Analysis of the Reasoning with Redundant Information Provided Ability of Large Language Models
Wenbei Xie
Thought Propagation: An Analogical Approach to Complex Reasoning with Large Language Models
Junchi Yu, Ran He, Rex Ying
LOGICSEG: Parsing Visual Semantics with Neural Logic Learning and Reasoning
Liulei Li, Wenguan Wang, Yi Yang
Natural Language based Context Modeling and Reasoning for Ubiquitous Computing with Large Language Models: A Tutorial
Haoyi Xiong, Jiang Bian, Sijia Yang, Xiaofei Zhang, Linghe Kong, Daqing Zhang