Commonsense Reasoning
Commonsense reasoning, the ability of AI systems to understand and apply everyday knowledge, is a crucial area of research that aims to bridge the gap between human and artificial intelligence. Current work focuses on integrating large language models (LLMs) with other modalities such as vision and tactile data, often using techniques like instruction tuning, multimodal learning, and knowledge graph integration to improve performance on commonsense reasoning benchmarks. This research is significant because stronger commonsense reasoning is essential for building more robust, reliable, and explainable AI systems across diverse applications, including robotics, deepfake detection, and conversational AI.
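To make the knowledge-integration idea concrete, the sketch below shows one common pattern: retrieving ConceptNet-style triples relevant to a question and formatting them as facts in an LLM prompt. This is a minimal illustration, not the method of any listed paper; the triple store, overlap-based scoring, and prompt template are all assumptions introduced here for clarity.

```python
# Illustrative sketch: knowledge graph integration for commonsense QA.
# Retrieve relevant (subject, relation, object) triples and prepend them
# to the question before sending the prompt to an LLM of choice.
from typing import List, Tuple

# Tiny in-memory, ConceptNet-style knowledge graph (hypothetical entries).
KG: List[Tuple[str, str, str]] = [
    ("umbrella", "UsedFor", "staying dry in the rain"),
    ("rain", "CausesDesire", "staying indoors"),
    ("ice", "HasProperty", "slippery"),
    ("stove", "UsedFor", "cooking food"),
]

def retrieve_triples(question: str, kg: List[Tuple[str, str, str]], k: int = 2):
    """Score triples by word overlap with the question and keep the top k."""
    q_words = set(question.lower().split())
    scored = []
    for s, r, o in kg:
        overlap = len(q_words & (set(s.lower().split()) | set(o.lower().split())))
        if overlap:
            scored.append((overlap, (s, r, o)))
    scored.sort(key=lambda x: x[0], reverse=True)
    return [triple for _, triple in scored[:k]]

def build_prompt(question: str, triples: List[Tuple[str, str, str]]) -> str:
    """Format retrieved triples as natural-language facts ahead of the question."""
    facts = "\n".join(f"- {s} {r} {o}" for s, r, o in triples)
    return f"Known facts:\n{facts}\n\nQuestion: {question}\nAnswer:"

if __name__ == "__main__":
    q = "Why would someone carry an umbrella when rain is forecast?"
    prompt = build_prompt(q, retrieve_triples(q, KG))
    print(prompt)  # This prompt string would then be passed to an LLM.
```

In practice, the hand-written triple list would be replaced by retrieval from a real knowledge graph, and the filtering step is where much of the research effort lies (e.g., selecting knowledge that is relevant and non-misleading before integration).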
Papers
Exploring the Reliability of Foundation Model-Based Frontier Selection in Zero-Shot Object Goal Navigation
Shuaihang Yuan, Halil Utku Unlu, Hao Huang, Congcong Wen, Anthony Tzes, Yi Fang
Improving Generalization in Visual Reasoning via Self-Ensemble
Tien-Huy Nguyen, Quang-Khai Tran, Anh-Tuan Quang-Hoang
LINKED: Eliciting, Filtering and Integrating Knowledge in Large Language Model for Commonsense Reasoning
Jiachun Li, Pengfei Cao, Chenhao Wang, Zhuoran Jin, Yubo Chen, Kang Liu, Xiaojian Jiang, Jiexin Xu, Jun Zhao
Zero-shot Commonsense Reasoning over Machine Imagination
Hyuntae Park, Yeachan Kim, Jun-Hyung Park, SangKeun Lee (Korea University)