NeRF SLAM
NeRF SLAM integrates Neural Radiance Fields (NeRFs), which represent 3D scenes as continuous functions encoded by neural networks, with Simultaneous Localization and Mapping (SLAM) techniques to build accurate 3D models from images or videos whose camera poses may be noisy or sparse. Current research focuses on improving robustness to challenging conditions such as motion blur, dynamic scenes, and sparse data, often employing Kalman filtering for motion estimation, invertible neural networks for efficient deformation modeling, and feature tracking for global consistency. This approach holds significant promise for applications that require accurate 3D scene reconstruction from limited or imperfect visual data, such as autonomous navigation, augmented reality, and 3D content creation.
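To make the NeRF half of the pipeline concrete, the sketch below volume-renders a single ray through a radiance field: the field is queried at sample points for a density and a color, and the samples are composited front-to-back. In a real NeRF the field is an MLP trained from posed images; here a hand-written `toy_field` (a soft sphere, purely illustrative and not from any of the papers above) stands in for the network.

```python
import numpy as np

def toy_field(points):
    """Stand-in for the NeRF MLP (hypothetical, for illustration only):
    a soft spherical shell of radius 0.5 at the origin, colored by position."""
    r = np.linalg.norm(points, axis=-1)
    sigma = 5.0 * np.exp(-20.0 * (r - 0.5) ** 2)   # density peaks on the shell
    rgb = 0.5 + 0.5 * np.tanh(points)              # smooth per-point color in (0, 1)
    return sigma, rgb

def render_ray(origin, direction, t_near=0.0, t_far=2.0, n_samples=64):
    """Volume-render one ray: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(t_near, t_far, n_samples)
    delta = np.diff(t, append=t_far)               # spacing between samples
    points = origin + t[:, None] * direction       # sample points along the ray
    sigma, rgb = toy_field(points)
    alpha = 1.0 - np.exp(-sigma * delta)           # per-sample opacity
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alpha[:-1]]))  # transmittance T_i
    weights = trans * alpha
    color = (weights[:, None] * rgb).sum(axis=0)
    return color, weights.sum()                    # rendered RGB, accumulated opacity

# Shoot one ray from z = -1.5 straight through the sphere.
color, opacity = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]))
```

In a NeRF SLAM system this differentiable rendering step is what couples mapping to localization: rendered colors are compared against observed pixels, and the resulting photometric error is backpropagated to refine both the scene representation and the camera poses.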
Papers
Zero NeRF: Registration with Zero Overlap
Casey Peat, Oliver Batchelor, Richard Green, James Atlas
SPIn-NeRF: Multiview Segmentation and Perceptual Inpainting with Neural Radiance Fields
Ashkan Mirzaei, Tristan Aumentado-Armstrong, Konstantinos G. Derpanis, Jonathan Kelly, Marcus A. Brubaker, Igor Gilitschenski, Alex Levinshtein