Multi-Hop Reasoning
Multi-hop reasoning focuses on enabling AI systems, particularly large language models (LLMs), to solve problems that require multiple inferential steps and the integration of information from several sources. Current research emphasizes improving the accuracy and robustness of these systems, addressing challenges such as susceptibility to misleading information and the need for efficient reasoning paths, often through techniques like chain-of-thought prompting, reinforcement learning, and knowledge-graph integration. This work is crucial for complex tasks such as question answering, knowledge-base completion, and decision-making, advancing both the scientific understanding of reasoning and the development of more reliable AI applications.
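To make the idea of chain-of-thought prompting for multi-hop questions concrete, here is a minimal Python sketch. It is illustrative only: the `llm` callable, the `multi_hop_cot` helper, and the few-shot example are hypothetical stand-ins, not drawn from any of the papers listed below; a real client for whichever model you use would replace the toy stub.

```python
# Minimal sketch of multi-hop chain-of-thought prompting (illustrative only).
# The `llm` callable is a hypothetical stand-in for any text-completion function.

from typing import Callable

FEW_SHOT_EXAMPLE = """\
Question: Who directed the film in which the lead actor of "Cast Away" starred in 1994?
Reasoning:
1. The lead actor of "Cast Away" is Tom Hanks.          (hop 1)
2. A 1994 film starring Tom Hanks is "Forrest Gump".    (hop 2)
3. "Forrest Gump" was directed by Robert Zemeckis.      (hop 3)
Answer: Robert Zemeckis
"""


def multi_hop_cot(question: str, llm: Callable[[str], str]) -> str:
    """Prompt the model to spell out each intermediate hop before the final answer."""
    prompt = (
        "Answer the question by reasoning step by step, "
        "listing each intermediate fact (hop) before the final answer.\n\n"
        f"{FEW_SHOT_EXAMPLE}\n"
        f"Question: {question}\n"
        "Reasoning:"
    )
    completion = llm(prompt)
    # By convention here, the final answer follows the last "Answer:" marker.
    return completion.rsplit("Answer:", 1)[-1].strip()


if __name__ == "__main__":
    # Toy stub that returns a canned chain of thought, just to show the flow.
    fake_llm = lambda p: "1. ...\n2. ...\nAnswer: Robert Zemeckis"
    print(multi_hop_cot("Who directed the 1994 film starring the lead actor of 'Cast Away'?", fake_llm))
```

The design point is simply that the prompt forces the model to surface each intermediate fact as a numbered hop, so errors in individual steps become visible and can be checked or revised.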
Papers
Dialogue Chain-of-Thought Distillation for Commonsense-aware Conversational Agents
Hyungjoo Chae, Yongho Song, Kai Tzu-iunn Ong, Taeyoon Kwon, Minjin Kim, Youngjae Yu, Dongha Lee, Dongyeop Kang, Jinyoung Yeo
Learning To Teach Large Language Models Logical Reasoning
Meiqi Chen, Yubo Ma, Kaitao Song, Yixin Cao, Yan Zhang, Dongsheng Li