Neural Scene Representation
Neural scene representation aims to build computationally efficient and versatile 3D models of scenes from 2D images or other sensor data, enabling novel view synthesis and other downstream tasks. Current research focuses on improving the accuracy, speed, and scalability of these representations through architectures such as neural radiance fields (NeRFs), Gaussian splatting, and point-based methods, often incorporating depth information and tackling challenges such as dynamic scenes and large-scale environments. These advances have significant implications for robotics, autonomous driving, virtual and augmented reality, and 3D modeling, as they provide more realistic and efficient ways to represent and interact with 3D environments.
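To make the NeRF family mentioned above concrete, the sketch below shows a minimal radiance-field model: an MLP that maps a 3D point and viewing direction to color and volume density, which a volume renderer would then integrate along camera rays for novel view synthesis. This is an illustrative sketch only; the class name, layer sizes, and encoding frequency count are assumptions for exposition and are not taken from the papers listed below.

```python
import torch
import torch.nn as nn


def positional_encoding(x: torch.Tensor, n_freqs: int = 6) -> torch.Tensor:
    """Map coordinates to sin/cos features so the MLP can fit high-frequency detail."""
    feats = [x]
    for i in range(n_freqs):
        feats.append(torch.sin((2.0 ** i) * x))
        feats.append(torch.cos((2.0 ** i) * x))
    return torch.cat(feats, dim=-1)


class TinyRadianceField(nn.Module):
    """Hypothetical minimal NeRF-style field: position + view direction -> (RGB, density)."""

    def __init__(self, n_freqs: int = 6, hidden: int = 128):
        super().__init__()
        self.n_freqs = n_freqs
        enc_dim = 3 * (1 + 2 * n_freqs)  # size of an encoded 3D vector
        self.trunk = nn.Sequential(
            nn.Linear(enc_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
        )
        self.density_head = nn.Linear(hidden, 1)          # scalar volume density
        self.color_head = nn.Sequential(                  # view-dependent RGB
            nn.Linear(hidden + enc_dim, hidden // 2), nn.ReLU(),
            nn.Linear(hidden // 2, 3), nn.Sigmoid(),
        )

    def forward(self, xyz: torch.Tensor, view_dir: torch.Tensor):
        h = self.trunk(positional_encoding(xyz, self.n_freqs))
        sigma = torch.relu(self.density_head(h))          # density is non-negative
        rgb = self.color_head(
            torch.cat([h, positional_encoding(view_dir, self.n_freqs)], dim=-1)
        )
        return rgb, sigma


# Query the field at a batch of sample points along camera rays.
model = TinyRadianceField()
points = torch.rand(1024, 3)                              # sampled 3D positions
dirs = nn.functional.normalize(torch.rand(1024, 3), dim=-1)
rgb, sigma = model(points, dirs)                          # (1024, 3) colors, (1024, 1) densities
```

Gaussian splatting and point-based methods replace the MLP query with explicit primitives (3D Gaussians or points) that are rasterized directly, trading some compactness for much faster rendering.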
Papers
PLGSLAM: Progressive Neural Scene Representation with Local to Global Bundle Adjustment
Tianchen Deng, Guole Shen, Tong Qin, Jianyu Wang, Wentao Zhao, Jingchuan Wang, Danwei Wang, Weidong Chen
RANRAC: Robust Neural Scene Representations via Random Ray Consensus
Benno Buschmann, Andreea Dogaru, Elmar Eisemann, Michael Weinmann, Bernhard Egger