Open World
Open-world research focuses on developing AI systems that can operate in unpredictable, dynamic environments containing unknown objects and situations, in contrast to traditional closed-world systems built around predefined constraints. Current work emphasizes robust generalization and zero-shot capabilities, often employing vision-language models (VLMs), large language models (LLMs), and training paradigms such as contrastive and self-supervised learning to handle unseen data and concepts. This line of work is crucial for advancing AI's real-world applicability, particularly in robotics, autonomous driving, and other safety-critical domains that demand adaptability and resilience to unexpected events.
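For context, the zero-shot, open-vocabulary behaviour referred to above typically works by embedding images and free-form text prompts into a shared, contrastively trained space and scoring their similarity, so new categories can be named at inference time rather than fixed during training. The sketch below illustrates this with CLIP via the Hugging Face transformers API; the checkpoint name, image path, and label prompts are illustrative assumptions, not details taken from any of the listed papers.

```python
# Minimal sketch of zero-shot, open-vocabulary recognition with a
# contrastively trained vision-language model (CLIP). Labels are supplied
# as free-form text at inference time, so previously unseen categories can
# be scored without retraining.
import torch
from PIL import Image
from transformers import CLIPModel, CLIPProcessor

# Checkpoint and prompts are illustrative; any CLIP-style checkpoint works.
model = CLIPModel.from_pretrained("openai/clip-vit-base-patch32")
processor = CLIPProcessor.from_pretrained("openai/clip-vit-base-patch32")

image = Image.open("scene.jpg")  # placeholder input image
candidate_labels = [
    "a photo of a coffee mug",
    "a photo of a screwdriver",
    "a photo of an unknown household object",
]

# Encode the image and text prompts into the shared embedding space.
inputs = processor(text=candidate_labels, images=image,
                   return_tensors="pt", padding=True)
with torch.no_grad():
    outputs = model(**inputs)

# logits_per_image holds scaled cosine similarities between the image and
# each text prompt; softmax turns them into a distribution over labels.
probs = outputs.logits_per_image.softmax(dim=-1)
for label, p in zip(candidate_labels, probs[0].tolist()):
    print(f"{label}: {p:.3f}")
```

Because the label set is just a list of strings, extending it to new objects encountered in an open-world setting requires no additional training, which is the property the surveyed methods build on.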
Papers
Towards Open-World Mobile Manipulation in Homes: Lessons from the Neurips 2023 HomeRobot Open Vocabulary Mobile Manipulation Challenge
Sriram Yenamandra, Arun Ramachandran, Mukul Khanna, Karmesh Yadav, Jay Vakil, Andrew Melnik, Michael Büttner, Leon Harz, Lyon Brown, Gora Chand Nandi, Arjun PS, Gaurav Kumar Yadav, Rahul Kala, Robert Haschke, Yang Luo, Jinxin Zhu, Yansen Han, Bingyi Lu, Xuan Gu, Qinyuan Liu, Yaping Zhao, Qiting Ye, Chenxiao Dou, Yansong Chua, Volodymyr Kuzma, Vladyslav Humennyy, Ruslan Partsey, Jonathan Francis, Devendra Singh Chaplot, Gunjan Chhablani, Alexander Clegg, Theophile Gervet, Vidhi Jain, Ram Ramrakhya, Andrew Szot, Austin Wang, Tsung-Yen Yang, Aaron Edsinger, Charlie Kemp, Binit Shah, Zsolt Kira, Dhruv Batra, Roozbeh Mottaghi, Yonatan Bisk, Chris Paxton
CEIA: CLIP-Based Event-Image Alignment for Open-World Event-Based Understanding
Wenhao Xu, Wenming Weng, Yueyi Zhang, Zhiwei Xiong