Open World
Open-world research focuses on developing AI systems that can operate in unpredictable, dynamic environments containing unknown objects and situations, in contrast to traditional closed-world systems built around predefined constraints. Current research emphasizes robust generalization and zero-shot capabilities, often employing vision-language models (VLMs), large language models (LLMs), and techniques such as contrastive and self-supervised learning to handle unseen data and concepts. This line of work is crucial for advancing AI's real-world applicability, particularly in robotics, autonomous driving, and other safety-critical domains that require adaptability and resilience to unexpected events.
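As a concrete illustration of the zero-shot, open-vocabulary capability described above, the sketch below scores an image against an arbitrary, user-defined label set using a contrastively pretrained vision-language model. It is a minimal example, assuming the Hugging Face transformers and Pillow packages and the openai/clip-vit-base-patch32 checkpoint; the image URL and label list are placeholders, not part of any paper listed here.

```python
# Minimal sketch: zero-shot, open-vocabulary image classification with a
# contrastively pretrained vision-language model (CLIP-style).
# The checkpoint, image URL, and label set are illustrative placeholders.
import requests
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

# The label set can be changed at inference time without retraining --
# the essence of the open-world / zero-shot setting.
labels = [
    "a photo of a traffic cone",
    "a photo of a deer on the road",
    "a photo of debris on the road",
]

image = Image.open(requests.get("https://example.com/scene.jpg", stream=True).raw)

inputs = processor(text=labels, images=image, return_tensors="pt", padding=True)
outputs = model(**inputs)

# logits_per_image holds image-text similarity scores; softmax turns them
# into probabilities over the user-supplied labels.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the text encoder embeds arbitrary prompts, novel categories can be queried simply by editing the label list, which is why such models recur throughout the open-world literature.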
Papers
EgoLifter: Open-world 3D Segmentation for Egocentric Perception
Qiao Gu, Zhaoyang Lv, Duncan Frost, Simon Green, Julian Straub, Chris Sweeney
AIDE: An Automatic Data Engine for Object Detection in Autonomous Driving
Mingfu Liang, Jong-Chyi Su, Samuel Schulter, Sparsh Garg, Shiyu Zhao, Ying Wu, Manmohan Chandraker