Open World
Open-world research focuses on developing AI systems that can operate in unpredictable, dynamic environments containing unknown objects and situations, in contrast to traditional closed-world systems that assume a fixed, predefined set of classes and conditions. Current work emphasizes robust generalization and zero-shot capabilities, often combining vision-language models (VLMs) and large language models (LLMs) with techniques such as contrastive and self-supervised learning to handle unseen data and concepts. This research is crucial for advancing AI's real-world applicability, particularly in robotics, autonomous driving, and other safety-critical domains that demand adaptability and resilience to unexpected events.
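To make the open-world idea concrete, the sketch below illustrates one common pattern behind many of the approaches above: score an input embedding (e.g., from a VLM) against prototypes of known classes and reject it as "unknown" when no prototype is similar enough. This is a minimal, generic illustration, not the method of any listed paper; the function name, threshold value, and random embeddings are illustrative assumptions.

```python
import numpy as np

def open_set_classify(query, prototypes, labels, threshold=0.6):
    """Assign `query` to the nearest known-class prototype by cosine
    similarity, or return 'unknown' if no prototype is similar enough."""
    q = query / np.linalg.norm(query)
    p = prototypes / np.linalg.norm(prototypes, axis=1, keepdims=True)
    sims = p @ q                   # cosine similarity to each known class
    best = int(np.argmax(sims))
    if sims[best] < threshold:     # below threshold -> treat as novel / open-world
        return "unknown", float(sims[best])
    return labels[best], float(sims[best])

# Toy usage: random vectors stand in for real VLM features.
rng = np.random.default_rng(0)
prototypes = rng.normal(size=(3, 512))   # hypothetical prototypes for known classes
labels = ["car", "dog", "tree"]
query = rng.normal(size=512)             # embedding of a possibly unseen object
print(open_set_classify(query, prototypes, labels))
```

In practice the threshold is tuned on held-out data, and the prototypes come from learned class embeddings rather than random vectors; the key open-world ingredient is the explicit rejection path for inputs that match no known class.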
Papers
Boosting Open-Domain Continual Learning via Leveraging Intra-domain Category-aware Prototype
Yadong Lu, Shitian Zhao, Boxiang Yun, Dongsheng Jiang, Yin Li, Qingli Li, Yan Wang
Towards Few-Shot Learning in the Open World: A Review and Beyond
Hui Xue, Yuexuan An, Yongchun Qin, Wenqian Li, Yixin Wu, Yongjuan Che, Pengfei Fang, Minling Zhang
On the Foundations of Conflict-Driven Solving for Hybrid MKNF Knowledge Bases
Riley Kinahan, Spencer Killen, Kevin Wan, Jia-Huai You
Towards Open-World Object-based Anomaly Detection via Self-Supervised Outlier Synthesis
Brian K. S. Isaac-Medina, Yona Falinie A. Gaus, Neelanjan Bhowmik, Toby P. Breckon
Odyssey: Empowering Minecraft Agents with Open-World Skills
Shunyu Liu, Yaoru Li, Kongcheng Zhang, Zhenyu Cui, Wenkai Fang, Yuxuan Zheng, Tongya Zheng, Mingli Song