Neural Radiance Fields
Neural Radiance Fields (NeRFs) are a technique for reconstructing both the geometry and appearance of a 3D scene from a set of posed 2D images, enabling photorealistic novel view synthesis. Current research focuses on improving efficiency and robustness: faster rendering through variants such as Gaussian splatting, adaptation to other sensing modalities (LiDAR, infrared, ultrasound), and handling of challenging conditions such as low light and sparse input views. High-fidelity 3D scene modeling from limited input data has applications in autonomous driving, robotics, medical imaging, and virtual/augmented reality.
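At the core of NeRF rendering is volume rendering along camera rays: a network predicts a density and color at sample points on each ray, and these are composited into a pixel color using transmittance-weighted alpha blending. The sketch below shows that compositing step in NumPy (the network itself is omitted; the function name and the toy inputs are illustrative, not from any specific codebase):

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray using the
    standard NeRF quadrature of the volume-rendering integral.

    sigmas: (N,) volume densities at N samples along the ray
    colors: (N, 3) RGB emitted at each sample
    deltas: (N,) distances between adjacent samples
    """
    # alpha_i = 1 - exp(-sigma_i * delta_i): opacity of segment i
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # T_i: transmittance, the probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas                       # per-sample contribution
    rgb = (weights[:, None] * colors).sum(axis=0)  # expected ray color
    return rgb, weights

# Toy ray: a nearly transparent blue sample in front of an opaque red one,
# so the rendered pixel should be dominated by red.
sigmas = np.array([0.1, 50.0])
colors = np.array([[0.0, 0.0, 1.0], [1.0, 0.0, 0.0]])
deltas = np.array([0.5, 0.5])
rgb, weights = volume_render(sigmas, colors, deltas)
```

Because the compositing is differentiable, training simply compares `rgb` against the ground-truth pixel and backpropagates through the density and color predictions.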
Papers
NeRFuser: Large-Scale Scene Representation by NeRF Fusion
Jiading Fang, Shengjie Lin, Igor Vasiljevic, Vitor Guizilini, Rares Ambrus, Adrien Gaidon, Gregory Shakhnarovich, Matthew R. Walter
Registering Neural Radiance Fields as 3D Density Images
Han Jiang, Ruoxuan Li, Haosen Sun, Yu-Wing Tai, Chi-Keung Tang
MultiPlaneNeRF: Neural Radiance Field with Non-Trainable Representation
Dominik Zimny, Artur Kasymov, Adam Kania, Jacek Tabor, Maciej Zięba, Przemysław Spurek
OR-NeRF: Object Removing from 3D Scenes Guided by Multiview Segmentation with Neural Radiance Fields
Youtan Yin, Zhoujie Fu, Fan Yang, Guosheng Lin