Neural Radiance Fields
Neural Radiance Fields (NeRFs) are a technique for building realistic 3D scene representations from 2D images, reconstructing both geometry and appearance. Current research focuses on improving efficiency and robustness, exploring variants such as Gaussian splatting for faster rendering and adapting NeRFs to diverse data modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). By enabling high-fidelity 3D scene modeling and novel view synthesis from limited input data, this technology has significant implications for autonomous driving, robotics, medical imaging, and virtual/augmented reality.
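For orientation, below is a minimal sketch of the core NeRF formulation, not drawn from any of the papers listed here: an MLP maps a 3D position and viewing direction to a color and volume density, and a pixel's color is recovered by alpha-compositing samples along the camera ray. The network shape, layer sizes, and near/far bounds are illustrative assumptions.

```python
# Minimal NeRF-style sketch (illustrative; names and sizes are assumptions,
# not taken from any of the papers below).
import torch
import torch.nn as nn

class TinyNeRF(nn.Module):
    """MLP mapping a 3D position and view direction to (RGB color, density)."""
    def __init__(self, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(6, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # 3 color channels + 1 density
        )

    def forward(self, xyz, view_dir):
        out = self.net(torch.cat([xyz, view_dir], dim=-1))
        rgb = torch.sigmoid(out[..., :3])    # color in [0, 1]
        sigma = torch.relu(out[..., 3])      # non-negative volume density
        return rgb, sigma

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    """Alpha-composite samples along one camera ray (classic volume rendering)."""
    t = torch.linspace(near, far, n_samples)          # sample depths along ray
    pts = origin + t[:, None] * direction             # (n_samples, 3) positions
    dirs = direction.expand(n_samples, 3)             # same view dir per sample
    rgb, sigma = model(pts, dirs)
    delta = t[1:] - t[:-1]
    delta = torch.cat([delta, delta[-1:]])            # spacing per sample
    alpha = 1.0 - torch.exp(-sigma * delta)           # opacity per sample
    trans = torch.cumprod(
        torch.cat([torch.ones(1), 1.0 - alpha + 1e-10])[:-1], dim=0
    )                                                 # transmittance to sample
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)        # composited pixel color

# Usage: render one ray through an (untrained) scene.
model = TinyNeRF()
color = render_ray(model, torch.zeros(3), torch.tensor([0.0, 0.0, 1.0]))
```

Full implementations add positional encoding and hierarchical sampling and fit the MLP to captured images; Gaussian splatting instead represents the scene with explicit 3D Gaussians rasterized directly, which is where much of its rendering speedup comes from.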
Papers
BRDF-NeRF: Neural Radiance Fields with Optical Satellite Images and BRDF Modelling
Lulin Zhang, Ewelina Rupnik, Tri Dung Nguyen, Stéphane Jacquemoud, Yann Klinger
Intraoperative Registration by Cross-Modal Inverse Neural Rendering
Maximilian Fehrentz, Mohammad Farid Azampour, Reuben Dorent, Hassan Rasheed, Colin Galvin, Alexandra Golby, William M. Wells, Sarah Frisken, Nassir Navab, Nazim Haouchine
LSE-NeRF: Learning Sensor Modeling Errors for Deblured Neural Radiance Fields with RGB-Event Stereo
Wei Zhi Tang, Daniel Rebain, Konstantinos G. Derpanis, Kwang Moo Yi
G-NeLF: Memory- and Data-Efficient Hybrid Neural Light Field for Novel View Synthesis
Lutao Jiang, Lin Wang
Neural Surface Reconstruction and Rendering for LiDAR-Visual Systems
Jianheng Liu, Chunran Zheng, Yunfei Wan, Bowen Wang, Yixi Cai, Fu Zhang