Open World
Open-world research focuses on developing AI systems capable of operating in unpredictable, dynamic environments with unknown objects and situations, unlike traditional closed-world systems that assume a fixed, predefined set of classes. Current research emphasizes robust generalization and zero-shot capabilities, often employing vision-language models (VLMs), large language models (LLMs), and techniques such as contrastive and self-supervised learning to handle unseen data and concepts. This work is crucial for advancing AI's real-world applicability, particularly in robotics, autonomous driving, and other safety-critical domains that require adaptability and resilience to unexpected events.
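As a concrete illustration of the zero-shot, open-vocabulary recognition style mentioned above, the minimal sketch below scores an image against an arbitrary, user-supplied set of text labels with a pretrained vision-language model. It assumes the Hugging Face transformers library and the public openai/clip-vit-base-patch32 checkpoint; the image path and label prompts are purely illustrative, not taken from any of the papers listed here.

    # Minimal sketch: zero-shot open-vocabulary recognition with a VLM (CLIP).
    # Assumes Hugging Face transformers and the openai/clip-vit-base-patch32
    # checkpoint; the image path and label prompts are hypothetical.
    import torch
    from PIL import Image
    from transformers import CLIPModel, CLIPProcessor

    model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
    processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

    image = Image.open("street_scene.jpg")  # hypothetical input image
    labels = [
        "a photo of a bicycle",
        "a photo of a traffic cone",
        "a photo of an unknown object",
    ]

    inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
    with torch.no_grad():
        outputs = model(**inputs)

    # logits_per_image holds image-text similarity scores; softmax turns them
    # into a probability distribution over the open, user-defined label set.
    probs = outputs.logits_per_image.softmax(dim=-1)
    print(dict(zip(labels, probs[0].tolist())))

Because the label set is just text, it can be changed at inference time without retraining, which is the property open-world systems exploit to cope with previously unseen categories.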
Papers
Video Instance Segmentation in an Open-World
Omkar Thawakar, Sanath Narayan, Hisham Cholakkal, Rao Muhammad Anwer, Salman Khan, Jorma Laaksonen, Mubarak Shah, Fahad Shahbaz Khan
RegionPLC: Regional Point-Language Contrastive Learning for Open-World 3D Scene Understanding
Jihan Yang, Runyu Ding, Weipeng Deng, Zhe Wang, Xiaojuan Qi
Detecting Everything in the Open World: Towards Universal Object Detection
Zhenyu Wang, Yali Li, Xi Chen, Ser-Nam Lim, Antonio Torralba, Hengshuang Zhao, Shengjin Wang
Detecting the open-world objects with the help of the Brain
Shuailei Ma, Yuefeng Wang, Ying Wei, Peihao Chen, Zhixiang Ye, Jiaqi Fan, Enming Zhang, Thomas H. Li
Open World Classification with Adaptive Negative Samples
Ke Bai, Guoyin Wang, Jiwei Li, Sunghyun Park, Sungjin Lee, Puyang Xu, Ricardo Henao, Lawrence Carin
Open-world Instance Segmentation: Top-down Learning with Bottom-up Supervision
Tarun Kalluri, Weiyao Wang, Heng Wang, Manmohan Chandraker, Lorenzo Torresani, Du Tran