NeRF SLAM
NeRF SLAM integrates Neural Radiance Fields (NeRFs), which represent 3D scenes as neural networks, with Simultaneous Localization and Mapping (SLAM) to reconstruct accurate 3D models from images or videos whose camera poses may be noisy, sparse, or unknown. Current research focuses on robustness to challenging conditions such as motion blur, dynamic scenes, and sparse data, often employing Kalman filtering for motion estimation, invertible neural networks for efficient deformation modeling, and feature tracking for global consistency. The approach holds significant promise for applications that require accurate 3D reconstruction from limited or imperfect visual data, including autonomous navigation, augmented reality, and 3D modeling.
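To make the pipeline concrete, the sketch below shows the tracking half of a typical NeRF SLAM loop in PyTorch: the map (a small neural field) is frozen while the current camera pose is refined by gradient descent on a photometric loss between rendered and observed pixels. This is a minimal illustration under assumed names and shapes (TinyRadianceField, render_rays, track_pose are hypothetical, not the API of any paper listed below); rotation is held fixed for brevity, whereas real systems optimize the full 6-DoF pose on SE(3).

```python
# Minimal NeRF-SLAM tracking sketch (illustrative assumptions throughout):
# freeze the map, refine the camera pose by photometric gradient descent.
import torch
import torch.nn as nn

class TinyRadianceField(nn.Module):
    """Toy stand-in for a NeRF map: maps 3D points to (color, density)."""
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(3, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),            # 3 color channels + 1 density
        )

    def forward(self, pts):                  # pts: (N, S, 3)
        out = self.net(pts)
        rgb = torch.sigmoid(out[..., :3])    # colors in [0, 1]
        sigma = torch.relu(out[..., 3])      # non-negative volume density
        return rgb, sigma

def render_rays(field, origins, dirs, n_samples=32, near=0.1, far=4.0):
    """Standard NeRF quadrature volume rendering along each ray."""
    t = torch.linspace(near, far, n_samples)                    # (S,)
    pts = origins[:, None, :] + t[None, :, None] * dirs[:, None, :]
    rgb, sigma = field(pts)                                     # (N,S,3), (N,S)
    alpha = 1.0 - torch.exp(-sigma * (far - near) / n_samples)  # per-sample opacity
    trans = torch.cumprod(
        torch.cat([torch.ones_like(alpha[:, :1]), 1.0 - alpha + 1e-10], -1),
        -1)[:, :-1]                                             # transmittance
    weights = alpha * trans
    return (weights[..., None] * rgb).sum(dim=1)                # (N, 3)

def track_pose(field, pixels, dirs_world, trans_init, iters=50, lr=1e-2):
    """Tracking: freeze the map, refine the camera translation so that
    rendered colors match the observed pixels (rotation fixed for brevity)."""
    trans = trans_init.clone().requires_grad_(True)
    opt = torch.optim.Adam([trans], lr=lr)
    for _ in range(iters):
        opt.zero_grad()
        origins = trans.expand_as(dirs_world)   # one shared camera center
        pred = render_rays(field, origins, dirs_world)
        loss = ((pred - pixels) ** 2).mean()    # photometric loss
        loss.backward()
        opt.step()
    return trans.detach(), loss.item()

# Usage with random stand-in data: in a real system, pixels come from the
# current frame and dirs_world from the camera intrinsics and rotation.
field = TinyRadianceField()
dirs = torch.nn.functional.normalize(torch.randn(256, 3), dim=-1)
pixels = torch.rand(256, 3)
pose, err = track_pose(field, pixels, dirs, torch.zeros(3))
```

In a full system this tracking step alternates with a mapping step that optimizes the field weights on keyframes, and a motion prior such as the Kalman filtering mentioned above can supply the initial pose estimate for each new frame.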
Papers
NeRF-MAE: Masked AutoEncoders for Self-Supervised 3D Representation Learning for Neural Radiance Fields
Muhammad Zubair Irshad, Sergey Zakharov, Vitor Guizilini, Adrien Gaidon, Zsolt Kira, Rares Ambrus
Marrying NeRF with Feature Matching for One-step Pose Estimation
Ronghan Chen, Yang Cong, Yu Ren
DiSR-NeRF: Diffusion-Guided View-Consistent Super-Resolution NeRF
Jie Long Lee, Chen Li, Gim Hee Lee
NeRF as a Non-Distant Environment Emitter in Physics-based Inverse Rendering
Jingwang Ling, Ruihan Yu, Feng Xu, Chun Du, Shuang Zhao
OV-NeRF: Open-vocabulary Neural Radiance Fields with Vision and Language Foundation Models for 3D Semantic Understanding
Guibiao Liao, Kaichen Zhou, Zhenyu Bao, Kanglin Liu, Qing Li