Complex Reasoning
Complex reasoning in artificial intelligence focuses on developing models capable of multi-step logical inference and problem-solving, mirroring human cognitive abilities. Current research emphasizes improving large language models (LLMs) through techniques such as chain-of-thought prompting, retrieval-augmented generation (RAG), and the integration of symbolic reasoning with neural networks, often incorporating multi-modal data (e.g., visual and textual information). These advances improve the reliability and applicability of AI systems in fields such as autonomous driving, robotics, and scientific discovery by enabling more robust and accurate decision-making in complex scenarios.
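To make the two most common techniques above concrete, the sketch below shows the general shape of retrieval-augmented, chain-of-thought prompting: retrieve a few relevant documents, prepend them to the question, and ask the model to reason step by step. The toy corpus, the keyword-overlap retriever, and the call_llm stub are illustrative assumptions for this page, not the method of any paper listed below.

```python
# Minimal sketch of chain-of-thought prompting combined with a toy
# retrieval step (RAG-style). The corpus, the scoring rule, and the
# call_llm stub are placeholders, not any specific library API.

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Rank documents by naive keyword overlap with the query."""
    q_terms = set(query.lower().split())
    scored = sorted(corpus, key=lambda d: -len(q_terms & set(d.lower().split())))
    return scored[:k]

def build_cot_prompt(question: str, context: list[str]) -> str:
    """Prepend retrieved context and ask the model to reason step by step."""
    ctx = "\n".join(f"- {c}" for c in context)
    return (
        f"Context:\n{ctx}\n\n"
        f"Question: {question}\n"
        "Let's think step by step before giving a final answer."
    )

def call_llm(prompt: str) -> str:
    """Placeholder for a real LLM call (e.g., any chat-completion API)."""
    return "<model reasoning and final answer would appear here>"

if __name__ == "__main__":
    corpus = [
        "Flight AB123 departs Paris at 09:00 and arrives in Oslo at 11:30.",
        "Oslo is one hour ahead of Paris in winter.",
        "Train tickets between Paris and Lyon are refundable within 24 hours.",
    ]
    question = "How long is flight AB123 in local Paris time?"
    prompt = build_cot_prompt(question, retrieve(question, corpus))
    print(prompt)
    print(call_llm(prompt))
```

In a real system, the keyword retriever would typically be replaced by a dense vector index, and the stub by an actual model call; the prompt structure, however, is the core of the approach.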
Papers
Toward Self-Improvement of LLMs via Imagination, Searching, and Criticizing
Ye Tian, Baolin Peng, Linfeng Song, Lifeng Jin, Dian Yu, Haitao Mi, Dong Yu
RAGAR, Your Falsehood Radar: RAG-Augmented Reasoning for Political Fact-Checking using Multimodal Large Language Models
M. Abdul Khaliq, P. Chang, M. Ma, B. Pflugfelder, F. Miletić
RAR-b: Reasoning as Retrieval Benchmark
Chenghao Xiao, G Thomas Hudson, Noura Al Moubayed
A Survey of Reasoning for Substitution Relationships: Definitions, Methods, and Directions
Anxin Yang, Zhijuan Du, Tao Sun
THOUGHTSCULPT: Reasoning with Intermediate Revision and Search
Yizhou Chi, Kevin Yang, Dan Klein
AI Knowledge and Reasoning: Emulating Expert Creativity in Scientific Research
Anirban Mukherjee, Hannah Hanwen Chang
Cleared for Takeoff? Compositional & Conditional Reasoning may be the Achilles Heel to (Flight-Booking) Language Agents
Harsh Kohli, Huan Sun
Can only LLMs do Reasoning?: Potential of Small Language Models in Task Planning
Gawon Choi, Hyemin Ahn