Neural Radiance Field
Neural Radiance Fields (NeRFs) are a technique for building realistic 3D scene representations from 2D images, reconstructing both geometry and appearance so that novel views can be synthesized. Current research focuses on improving efficiency and robustness: exploring variants such as 3D Gaussian splatting for faster rendering, and adapting NeRFs to diverse data modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). By enabling high-fidelity 3D scene modeling and novel view synthesis from limited input data, this technology has significant implications for autonomous driving, robotics, medical imaging, and virtual/augmented reality.
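At the core of a NeRF, a neural network maps a 3D position and viewing direction to a volume density and color, and a pixel's color is obtained by alpha-compositing those predictions along the camera ray. The sketch below shows only that compositing step in NumPy, with hand-made toy densities and colors standing in for the network's outputs; the function name and array shapes are illustrative, not from any particular NeRF codebase.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one camera ray.

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB values predicted at each sample
    deltas: (N,) distances between adjacent samples
    """
    # opacity contributed by each ray segment
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                       # compositing weights
    rgb = (weights[:, None] * colors).sum(axis=0)  # expected ray color
    return rgb, weights

# toy ray: one dense red sample between two empty samples
sigmas = np.array([0.0, 50.0, 0.0])
colors = np.array([[0.0, 0.0, 1.0],   # blue (empty, no contribution)
                   [1.0, 0.0, 0.0],   # red  (dense, dominates the ray)
                   [0.0, 1.0, 0.0]])  # green (empty, no contribution)
deltas = np.full(3, 0.1)
rgb, weights = volume_render(sigmas, colors, deltas)
```

In a full pipeline, `sigmas` and `colors` come from an MLP queried at samples along each ray, and the rendered pixel colors are compared against the input photographs to train the network end to end.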
Papers
Loc-NeRF: Monte Carlo Localization using Neural Radiance Fields
Dominic Maggio, Marcus Abate, Jingnan Shi, Courtney Mario, Luca Carlone
NeRF-SOS: Any-View Self-supervised Object Segmentation on Complex Scenes
Zhiwen Fan, Peihao Wang, Yifan Jiang, Xinyu Gong, Dejia Xu, Zhangyang Wang
Density-aware NeRF Ensembles: Quantifying Predictive Uncertainty in Neural Radiance Fields
Niko Sünderhauf, Jad Abou-Chakra, Dimity Miller
ActiveNeRF: Learning where to See with Uncertainty Estimation
Xuran Pan, Zihang Lai, Shiji Song, Gao Huang
LATITUDE: Robotic Global Localization with Truncated Dynamic Low-pass Filter in City-scale NeRF
Zhenxin Zhu, Yuantao Chen, Zirui Wu, Chao Hou, Yongliang Shi, Chuxuan Li, Pengfei Li, Hao Zhao, Guyue Zhou