Scene Representation
Scene representation in computer vision and robotics aims to build digital models of real-world environments that support tasks such as robot navigation, visual localization, and scene understanding. Current research focuses on efficient and accurate representations built from neural radiance fields (NeRFs), Gaussian splatting, and graph neural networks, often incorporating semantic information and leveraging large language models for richer scene interpretation and interaction. These advances are crucial for improving the capabilities of autonomous systems and for enabling more sophisticated applications in robotics, augmented reality, and other fields that require robust scene understanding.
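To make the idea of an explicit scene representation concrete, here is a minimal sketch of the primitive underlying Gaussian splatting: the scene is modeled as a set of anisotropic 3D Gaussians, each carrying a position, a covariance factored into scale and rotation, an opacity, and a color. The class and field names below are illustrative assumptions, not drawn from any particular codebase.

```python
import numpy as np

class Gaussian3D:
    """One anisotropic 3D Gaussian primitive (illustrative sketch)."""

    def __init__(self, mean, scale, rotation, opacity, color):
        self.mean = np.asarray(mean, dtype=float)          # (3,) center position
        self.scale = np.asarray(scale, dtype=float)        # (3,) per-axis std. dev.
        self.rotation = np.asarray(rotation, dtype=float)  # (3, 3) rotation matrix
        self.opacity = float(opacity)                      # alpha in [0, 1]
        self.color = np.asarray(color, dtype=float)        # (3,) RGB

    def covariance(self):
        # Factoring Sigma = R S S^T R^T keeps it symmetric positive semi-definite.
        S = np.diag(self.scale)
        return self.rotation @ S @ S.T @ self.rotation.T

    def density(self, x):
        # Unnormalized Gaussian falloff at point x, weighted by opacity.
        d = np.asarray(x, dtype=float) - self.mean
        return self.opacity * np.exp(-0.5 * d @ np.linalg.inv(self.covariance()) @ d)

g = Gaussian3D(mean=[0, 0, 0], scale=[1.0, 1.0, 1.0],
               rotation=np.eye(3), opacity=0.8, color=[1.0, 0.0, 0.0])
print(round(g.density([0, 0, 0]), 3))  # density peaks at the mean: 0.8
```

Rendering then amounts to projecting each Gaussian to the image plane and alpha-compositing them front to back; the sketch above only shows the scene-side data structure that such methods optimize.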
Papers
Reinforcement Learning with Generalizable Gaussian Splatting
Jiaxu Wang, Qiang Zhang, Jingkai Sun, Jiahang Cao, Yecheng Shao, Renjing Xu
DVN-SLAM: Dynamic Visual Neural SLAM Based on Local-Global Encoding
Wenhua Wu, Guangming Wang, Ting Deng, Sebastian Aegidius, Stuart Shanks, Valerio Modugno, Dimitrios Kanoulas, Hesheng Wang