Open World
Open-world research focuses on developing AI systems capable of operating in unpredictable, dynamic environments containing unknown objects and situations, unlike traditional closed-world systems that assume a fixed, predefined set of classes. Current research emphasizes robust generalization and zero-shot capabilities, often employing vision-language models (VLMs), large language models (LLMs), and training techniques such as contrastive and self-supervised learning to handle unseen data and concepts. This work is crucial for advancing AI's real-world applicability, particularly in robotics, autonomous driving, and other safety-critical domains that require adaptability and resilience to unexpected events.
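As a concrete illustration of the zero-shot, open-world idea, the sketch below shows how contrastively trained vision-language embeddings (e.g., CLIP-style) can both classify unseen categories from text prompts and reject inputs that match no known concept. The embeddings, labels, and similarity threshold here are toy placeholders, not values from any of the papers listed.

```python
import numpy as np

def zero_shot_classify(image_emb, text_embs, labels, threshold=0.25):
    """Zero-shot classification via cosine similarity to text-prompt
    embeddings. If the best similarity falls below `threshold`, the
    input is flagged as 'unknown' -- a simple open-world rejection rule."""
    img = image_emb / np.linalg.norm(image_emb)
    txt = text_embs / np.linalg.norm(text_embs, axis=1, keepdims=True)
    sims = txt @ img                      # cosine similarity per label
    best = int(np.argmax(sims))
    if sims[best] < threshold:
        return "unknown", float(sims[best])
    return labels[best], float(sims[best])

# Toy 3-d embeddings standing in for real VLM outputs (hypothetical values).
labels = ["cat", "car"]
text_embs = np.array([[1.0, 0.0, 0.0],    # embedding of "a photo of a cat"
                      [0.0, 1.0, 0.0]])   # embedding of "a photo of a car"

known = np.array([0.9, 0.1, 0.1])         # image close to the "cat" prompt
novel = np.array([0.0, 0.0, 1.0])         # image unlike any known label
print(zero_shot_classify(known, text_embs, labels))  # -> ('cat', ...)
print(zero_shot_classify(novel, text_embs, labels))  # -> ('unknown', ...)
```

In a real system the embeddings would come from a pretrained encoder, and the rejection threshold would be calibrated on held-out data; the point is that no classifier retraining is needed to add or reject categories.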
Papers
DINO-X: A Unified Vision Model for Open-World Object Detection and Understanding
Tianhe Ren, Yihao Chen, Qing Jiang, Zhaoyang Zeng, Yuda Xiong, Wenlong Liu, Zhengyu Ma, Junyi Shen, Yuan Gao, Xiaoke Jiang, Xingyu Chen, Zhuheng Song, Yuhong Zhang, Hongjie Huang, Han Gao, Shilong Liu, Hao Zhang, Feng Li, Kent Yu, Lei Zhang
Single-Model Attribution for Spoofed Speech via Vocoder Fingerprints in an Open-World Setting
Matías Pizarro, Mike Laszkiewicz, Dorothea Kolossa, Asja Fischer
Automated 3D Physical Simulation of Open-world Scene with Gaussian Splatting
Haoyu Zhao, Hao Wang, Xingyue Zhao, Hongqiu Wang, Zhiyu Wu, Chengjiang Long, Hua Zou
Do LLMs Understand Ambiguity in Text? A Case Study in Open-world Question Answering
Aryan Keluskar, Amrita Bhattacharjee, Huan Liu
SNN-Based Online Learning of Concepts and Action Laws in an Open World
Christel Grimaud (IRIT-LILaC), Dominique Longin (IRIT-LILaC), Andreas Herzig (IRIT-LILaC)
UrbanDiT: A Foundation Model for Open-World Urban Spatio-Temporal Learning
Yuan Yuan, Chonghua Han, Jingtao Ding, Depeng Jin, Yong Li