Neural Radiance Fields
Neural Radiance Fields (NeRFs) are a powerful technique for building realistic 3D scene representations from 2D images, reconstructing both geometry and appearance. Current research focuses on improving efficiency and robustness: exploring variants such as Gaussian splatting for faster rendering, and adapting NeRFs to diverse sensing modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). By enabling high-fidelity 3D scene modeling and novel view synthesis from limited input data, this technology has significant implications for autonomous driving, robotics, medical imaging, and virtual/augmented reality.
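To make the core idea concrete, the sketch below illustrates NeRF-style volume rendering in NumPy; it is a minimal, illustrative example rather than the method of any paper listed here. In a full NeRF, an MLP queried at positionally encoded sample coordinates (and view directions) predicts a density and color per sample along each camera ray; those predictions are then composited into a pixel color using transmittance-weighted alpha blending. Here the network outputs are replaced by random stand-in values, and all function names and parameters are illustrative.

import numpy as np

def positional_encoding(x, num_freqs=4):
    # Map coordinates to sin/cos features at increasing frequencies,
    # as in the original NeRF formulation.
    freqs = 2.0 ** np.arange(num_freqs)            # [1, 2, 4, 8]
    angles = x[..., None] * freqs * np.pi          # (..., D, F)
    enc = np.concatenate([np.sin(angles), np.cos(angles)], axis=-1)
    return enc.reshape(*x.shape[:-1], -1)          # (..., D * 2F)

def volume_render(densities, colors, deltas):
    # Composite per-sample densities/colors along each ray into a pixel color.
    #   densities: (num_rays, num_samples)     non-negative sigma values
    #   colors:    (num_rays, num_samples, 3)  RGB in [0, 1]
    #   deltas:    (num_rays, num_samples)     distances between consecutive samples
    alpha = 1.0 - np.exp(-densities * deltas)      # opacity per sample
    # Transmittance: probability that the ray reaches each sample unoccluded.
    trans = np.cumprod(1.0 - alpha + 1e-10, axis=-1)
    trans = np.concatenate([np.ones_like(trans[:, :1]), trans[:, :-1]], axis=-1)
    weights = alpha * trans                        # (num_rays, num_samples)
    rgb = (weights[..., None] * colors).sum(axis=1)  # (num_rays, 3)
    return rgb, weights

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    num_rays, num_samples = 2, 64
    # Stand-ins for MLP outputs: a real NeRF would predict these from a network
    # queried at encoded sample positions and view directions.
    densities = rng.uniform(0.0, 5.0, size=(num_rays, num_samples))
    colors = rng.uniform(0.0, 1.0, size=(num_rays, num_samples, 3))
    deltas = np.full((num_rays, num_samples), 4.0 / num_samples)
    rgb, _ = volume_render(densities, colors, deltas)
    print("rendered pixel colors:\n", rgb)

The transmittance-weighted sum is what lets a NeRF be trained end to end: the rendering step is differentiable, so photometric loss against the input images can be backpropagated to the network that produces the densities and colors.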
Papers
SPARF: Neural Radiance Fields from Sparse and Noisy Poses
Prune Truong, Marie-Julie Rakotosaona, Fabian Manhardt, Federico Tombari
ESLAM: Efficient Dense SLAM System Based on Hybrid Representation of Signed Distance Fields
Mohammad Mahdi Johari, Camilla Carta, François Fleuret
Shape, Pose, and Appearance from a Single Image via Bootstrapped Radiance Field Inversion
Dario Pavllo, David Joseph Tan, Marie-Julie Rakotosaona, Federico Tombari
Local-to-Global Registration for Bundle-Adjusting Neural Radiance Fields
Yue Chen, Xingyu Chen, Xuan Wang, Qi Zhang, Yu Guo, Ying Shan, Fei Wang
SegNeRF: 3D Part Segmentation with Neural Radiance Fields
Jesus Zarzar, Sara Rojas, Silvio Giancola, Bernard Ghanem
FLNeRF: 3D Facial Landmarks Estimation in Neural Radiance Fields
Hao Zhang, Tianyuan Dai, Yu-Wing Tai, Chi-Keung Tang