Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). These advances enhance the reliability and applicability of AI systems across diverse fields, including autonomous driving, robotics, and scientific discovery, by enabling more robust and accurate decision-making in complex scenarios.
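As a minimal sketch of the chain-of-thought prompting technique mentioned above: the model is prompted with a worked exemplar and a step-by-step cue so that it emits intermediate reasoning before its final answer. The `call_model` stub below is a hypothetical placeholder for any LLM API, not a real library call.

```python
def build_cot_prompt(question: str) -> str:
    """Wrap a question with a one-shot chain-of-thought exemplar
    and a 'step by step' cue that elicits intermediate reasoning."""
    exemplar = (
        "Q: A pen costs $2 and a notebook costs $3. "
        "How much do 2 pens and 1 notebook cost?\n"
        "A: Let's think step by step. 2 pens cost 2 * $2 = $4. "
        "1 notebook costs $3. Total = $4 + $3 = $7. The answer is 7.\n\n"
    )
    return exemplar + f"Q: {question}\nA: Let's think step by step."


def call_model(prompt: str) -> str:
    # Hypothetical stub: a real system would send `prompt` to an LLM here.
    return "<model response>"


prompt = build_cot_prompt(
    "If a train travels 60 km in 1.5 hours, what is its speed?"
)
response = call_model(prompt)
```

In practice, the exemplar and cue encourage the model to decompose the problem into explicit steps, which several of the papers listed below analyze for faithfulness and robustness.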
Papers
On the Hardness of Faithful Chain-of-Thought Reasoning in Large Language Models
Sree Harsha Tanneru, Dan Ley, Chirag Agarwal, Himabindu Lakkaraju
Reasoning or Simply Next Token Prediction? A Benchmark for Stress-Testing Large Language Models
Wentian Wang, Paul Kantor, Jacob Feldman, Lazaros Gallos, Hao Wang
ReMI: A Dataset for Reasoning with Multiple Images
Mehran Kazemi, Nishanth Dikkala, Ankit Anand, Petar Devic, Ishita Dasgupta, Fangyu Liu, Bahare Fatemi, Pranjal Awasthi, Dee Guo, Sreenivas Gollapudi, Ahmed Qureshi
Leveraging Explicit Reasoning for Inference Integration in Commonsense-Augmented Dialogue Models
Sarah E. Finch, Jinho D. Choi
Chain of Preference Optimization: Improving Chain-of-Thought Reasoning in LLMs
Xuan Zhang, Chao Du, Tianyu Pang, Qian Liu, Wei Gao, Min Lin