Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving that mirrors human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). These advances improve the reliability and applicability of AI systems across diverse fields, including autonomous driving, robotics, and scientific discovery, by enabling more robust and accurate decision-making in complex scenarios.
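To make the first technique above concrete, the sketch below shows one common way chain-of-thought prompting is set up: few-shot exemplars containing worked reasoning are prepended to a question, and the final answer is parsed out of the model's free-form rationale. This is a minimal illustration, not the method of any paper listed here; the exemplars, the answer-extraction regex, and the call_llm placeholder are all assumptions meant to be swapped for the reader's own model client.

```python
# Minimal chain-of-thought prompting sketch (hypothetical; not tied to any
# specific paper listed below). The few-shot exemplars show worked reasoning
# so the model is encouraged to emit intermediate steps before its answer.
import re

COT_EXEMPLARS = """\
Q: A shelf holds 3 boxes with 4 books each. How many books are there?
A: Each box has 4 books and there are 3 boxes, so 3 * 4 = 12. The answer is 12.

Q: Tom had 9 apples and gave away 2. How many remain?
A: Starting from 9 apples and removing 2 leaves 9 - 2 = 7. The answer is 7.
"""

def build_cot_prompt(question: str) -> str:
    """Prepend worked exemplars so the model imitates step-by-step reasoning."""
    return f"{COT_EXEMPLARS}\nQ: {question}\nA:"

def extract_answer(completion: str) -> str | None:
    """Pull the final answer out of the model's free-form rationale."""
    match = re.search(r"answer is\s+([^\s.]+)", completion, re.IGNORECASE)
    return match.group(1) if match else None

def call_llm(prompt: str) -> str:
    """Placeholder for a real model call; replace with your own client."""
    # A canned completion stands in for an actual model response in this sketch.
    return "There are 5 bags with 6 marbles each, so 5 * 6 = 30. The answer is 30."

if __name__ == "__main__":
    prompt = build_cot_prompt("There are 5 bags with 6 marbles each. How many marbles in total?")
    completion = call_llm(prompt)
    print("Rationale:", completion)
    print("Parsed answer:", extract_answer(completion))
```

In a retrieval-augmented variant, the prompt builder would typically also prepend passages returned by a retriever ahead of the question, which is how RAG is usually combined with chain-of-thought prompting.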
Papers
Chain-of-Thought Hub: A Continuous Effort to Measure Large Language Models' Reasoning Performance
Yao Fu, Litu Ou, Mingyu Chen, Yuhao Wan, Hao Peng, Tushar Khot
NavGPT: Explicit Reasoning in Vision-and-Language Navigation with Large Language Models
Gengze Zhou, Yicong Hong, Qi Wu
Beyond Chain-of-Thought, Effective Graph-of-Thought Reasoning in Language Models
Yao Yao, Zuchao Li, Hai Zhao
SPRING: Studying the Paper and Reasoning to Play Games
Yue Wu, Shrimai Prabhumoye, So Yeon Min, Yonatan Bisk, Ruslan Salakhutdinov, Amos Azaria, Tom Mitchell, Yuanzhi Li
Reasoning with Language Model is Planning with World Model
Shibo Hao, Yi Gu, Haodi Ma, Joshua Jiahua Hong, Zhen Wang, Daisy Zhe Wang, Zhiting Hu
Machine Reading Comprehension using Case-based Reasoning
Dung Thai, Dhruv Agarwal, Mudit Chaudhary, Wenlong Zhao, Rajarshi Das, Manzil Zaheer, Jay-Yoon Lee, Hannaneh Hajishirzi, Andrew McCallum
UniChart: A Universal Vision-language Pretrained Model for Chart Comprehension and Reasoning
Ahmed Masry, Parsa Kavehzadeh, Xuan Long Do, Enamul Hoque, Shafiq Joty
PEARL: Prompting Large Language Models to Plan and Execute Actions Over Long Documents
Simeng Sun, Yang Liu, Shuohang Wang, Chenguang Zhu, Mohit Iyyer
Self-Polish: Enhance Reasoning in Large Language Models via Problem Refinement
Zhiheng Xi, Senjie Jin, Yuhao Zhou, Rui Zheng, Songyang Gao, Tao Gui, Qi Zhang, Xuanjing Huang
Image Manipulation via Multi-Hop Instructions -- A New Dataset and Weakly-Supervised Neuro-Symbolic Approach
Harman Singh, Poorva Garg, Mohit Gupta, Kevin Shah, Ashish Goswami, Satyam Modi, Arnab Kumar Mondal, Dinesh Khandelwal, Dinesh Garg, Parag Singla
Improving Factuality and Reasoning in Language Models through Multiagent Debate
Yilun Du, Shuang Li, Antonio Torralba, Joshua B. Tenenbaum, Igor Mordatch
ChatCoT: Tool-Augmented Chain-of-Thought Reasoning on Chat-based Large Language Models
Zhipeng Chen, Kun Zhou, Beichen Zhang, Zheng Gong, Wayne Xin Zhao, Ji-Rong Wen