Scene Representation
Scene representation in computer vision and robotics aims to build digital models of real-world environments that support tasks such as robot navigation, visual localization, and scene understanding. Current research focuses on efficient, accurate representations, including neural radiance fields (NeRFs), Gaussian splatting, and graph neural networks, often incorporating semantic information and leveraging large language models for richer scene interpretation and interaction. These advances are central to improving autonomous systems and to enabling more sophisticated applications in robotics, augmented reality, and other fields that require robust scene understanding.
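As a point of reference for the NeRF family mentioned above, the sketch below implements the standard discrete volume-rendering quadrature that composites a ray's color from sampled densities, the emission-absorption model used in NeRF-style methods. It is a minimal, self-contained illustration of the general technique, not code from any paper listed here; the function name and parameters are illustrative assumptions.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Discrete NeRF-style volume rendering along one camera ray.

    sigmas: (N,) per-sample volume densities
    colors: (N, 3) per-sample RGB colors
    deltas: (N,) distances between consecutive samples
    Returns the composited RGB color of the ray.
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Contribution weight of each sample along the ray
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Example: 64 samples along a ray through a synthetic density field
sigmas = np.linspace(0.0, 2.0, 64)
colors = np.tile([0.8, 0.4, 0.2], (64, 1))
deltas = np.full(64, 0.05)
print(volume_render(sigmas, colors, deltas))
```

The cumulative product gives each sample's transmittance, so the weights sum to at most one and empty regions of the scene contribute nothing to the composited color; Gaussian splatting uses an analogous alpha-compositing step over projected Gaussians rather than ray samples.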
Papers
A Survey on Monocular Re-Localization: From the Perspective of Scene Map Representation
Jinyu Miao, Kun Jiang, Tuopu Wen, Yunlong Wang, Peijing Jia, Xuhe Zhao, Qian Cheng, Zhongyang Xiao, Jin Huang, Zhihua Zhong, Diange Yang
CaesarNeRF: Calibrated Semantic Representation for Few-shot Generalizable Neural Rendering
Haidong Zhu, Tianyu Ding, Tianyi Chen, Ilya Zharkov, Ram Nevatia, Luming Liang