Neural Radiance Fields
Neural Radiance Fields (NeRFs) reconstruct both the geometry and appearance of a 3D scene from a set of 2D images, enabling photorealistic novel view synthesis. Current research focuses on improving efficiency and robustness: variants such as Gaussian splatting accelerate rendering, while other work adapts NeRFs to diverse sensing modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). By enabling high-fidelity 3D scene modeling from limited input data, this technology has significant implications for autonomous driving, robotics, medical imaging, and virtual/augmented reality.
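At the core of NeRF rendering is a volume-rendering quadrature: a network predicts a density and color at sample points along each camera ray, and these are alpha-composited into a pixel color. A minimal NumPy sketch of that compositing step is below; the function name and the toy sample values are illustrative, not from any of the papers listed here.

```python
import numpy as np

def volume_render(sigmas, colors, deltas):
    """Composite per-sample densities and colors along one ray using the
    NeRF quadrature:
        C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
        T_i = exp(-sum_{j<i} sigma_j * delta_j).
    sigmas: (N,) densities, colors: (N, 3) RGB, deltas: (N,) sample spacings.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)        # per-sample opacity
    trans = np.cumprod(1.0 - alphas + 1e-10)       # accumulated transmittance
    trans = np.concatenate([[1.0], trans[:-1]])    # first sample is fully visible
    weights = trans * alphas                       # compositing weights
    return (weights[:, None] * colors).sum(axis=0) # expected RGB along the ray

# Toy example: four samples along one ray, with a near-opaque green
# surface at the third sample partially occluded by a translucent red one.
sigmas = np.array([0.0, 5.0, 50.0, 0.1])
colors = np.array([[0, 0, 0], [1, 0, 0], [0, 1, 0], [0, 0, 1]], dtype=float)
deltas = np.full(4, 0.1)
rgb = volume_render(sigmas, colors, deltas)
```

Training a NeRF amounts to regressing `sigmas` and `colors` from a coordinate network so that rendered pixels match the input photographs; Gaussian-splatting variants replace the per-ray sampling with rasterized 3D Gaussians but keep the same alpha-compositing idea.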
Papers
SG-NeRF: Neural Surface Reconstruction with Scene Graph Optimization
Yiyang Chen, Siyan Dong, Xulong Wang, Lulu Cai, Youyi Zheng, Yanchao Yang
Invertible Neural Warp for NeRF
Shin-Fang Chng, Ravi Garg, Hemanth Saratchandran, Simon Lucey
Splatfacto-W: A Nerfstudio Implementation of Gaussian Splatting for Unconstrained Photo Collections
Congrong Xu, Justin Kerr, Angjoo Kanazawa
Evaluating geometric accuracy of NeRF reconstructions compared to SLAM method
Adam Korycki, Colleen Josephson, Steve McGuire
AirNeRF: 3D Reconstruction of Human with Drone and NeRF for Future Communication Systems
Alexey Kotcov, Maria Dronova, Vladislav Cheremnykh, Sausar Karaf, Dzmitry Tsetserukou
Domain Generalization for 6D Pose Estimation Through NeRF-based Image Synthesis
Antoine Legrand, Renaud Detry, Christophe De Vleeschouwer
IE-NeRF: Inpainting Enhanced Neural Radiance Fields in the Wild
Shuaixian Wang, Haoran Xu, Yaokun Li, Jiwei Chen, Guang Tan