Unseen Environment
Research on "unseen environments" focuses on enabling robots and AI systems to operate effectively in situations not encountered during training. Current efforts concentrate on developing robust perception and navigation methods using techniques like diffusion models, topological data analysis, and large language models integrated with visual-language models, often incorporating continual learning and self-supervised learning strategies to improve generalization. This research is crucial for advancing autonomous systems in diverse real-world applications, such as robotics, augmented reality, and autonomous driving, by improving their adaptability and reliability in unpredictable settings.
Papers
Does Pre-trained Language Model Actually Infer Unseen Links in Knowledge Graph Completion?
Yusuke Sakai, Hidetaka Kamigaito, Katsuhiko Hayashi, Taro Watanabe
Imagine the Unseen World: A Benchmark for Systematic Generalization in Visual World Models
Yeongbin Kim, Gautam Singh, Junyeong Park, Caglar Gulcehre, Sungjin Ahn
What Is Near?: Room Locality Learning for Enhanced Robot Vision-Language-Navigation in Indoor Living Environments
Muraleekrishna Gopinathan, Jumana Abu-Khalaf, David Suter, Sidike Paheding, Nathir A. Rawashdeh
SC-NeRF: Self-Correcting Neural Radiance Field with Sparse Views
Liang Song, Guangming Wang, Jiuming Liu, Zhenyang Fu, Yanzi Miao, Hesheng Wang