Neural Radiance Field
Neural Radiance Fields (NeRFs) represent a 3D scene as a learned function, reconstructing both geometry and appearance from a set of posed 2D images and enabling photorealistic novel view synthesis. Current research focuses on efficiency and robustness: faster rendering via variants such as Gaussian splatting, and extensions to diverse data modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). The technique has significant implications for autonomous driving, robotics, medical imaging, and virtual/augmented reality, where high-fidelity 3D scene models must be built from limited input data.
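As context for the papers below, here is a minimal PyTorch sketch of the core NeRF mechanism: a small MLP maps positionally encoded 3D coordinates to a color and a volume density, and a pixel is rendered by alpha-compositing samples along a camera ray. The names (`TinyNeRF`, `render_ray`) are illustrative, not from any paper listed here, and the sketch omits parts of the full method such as view-direction conditioning and hierarchical sampling.

```python
import torch
import torch.nn as nn

def positional_encoding(x, num_freqs=10):
    # Map each coordinate to [sin(2^k * x), cos(2^k * x)] for k = 0..num_freqs-1,
    # so the MLP can represent high-frequency scene detail.
    freqs = 2.0 ** torch.arange(num_freqs, dtype=x.dtype, device=x.device)
    angles = x[..., None] * freqs                      # (..., 3, num_freqs)
    enc = torch.cat([torch.sin(angles), torch.cos(angles)], dim=-1)
    return enc.flatten(start_dim=-2)                   # (..., 3 * 2 * num_freqs)

class TinyNeRF(nn.Module):
    # MLP mapping an encoded 3D position to RGB color and volume density sigma.
    # (The full method additionally conditions color on the viewing direction.)
    def __init__(self, num_freqs=10, hidden=256):
        super().__init__()
        in_dim = 3 * 2 * num_freqs
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),                      # RGB + sigma
        )

    def forward(self, xyz):
        out = self.mlp(positional_encoding(xyz))
        rgb = torch.sigmoid(out[..., :3])              # colors in [0, 1]
        sigma = torch.relu(out[..., 3])                # non-negative density
        return rgb, sigma

def render_ray(model, origin, direction, near=2.0, far=6.0, n_samples=64):
    # Classic volume rendering: sample points along the ray, query the field,
    # and composite colors weighted by opacity times accumulated transmittance.
    t = torch.linspace(near, far, n_samples)
    pts = origin + t[:, None] * direction              # (n_samples, 3)
    rgb, sigma = model(pts)
    delta = t[1:] - t[:-1]
    delta = torch.cat([delta, delta[-1:]])             # spacing between samples
    alpha = 1.0 - torch.exp(-sigma * delta)            # opacity per sample
    trans = torch.cumprod(1.0 - alpha + 1e-10, dim=0)
    trans = torch.cat([torch.ones(1), trans[:-1]])     # transmittance to each sample
    weights = alpha * trans
    return (weights[:, None] * rgb).sum(dim=0)         # composited pixel color
```

Training fits the MLP weights by minimizing the photometric error between colors rendered this way and the observed pixel colors across the input views; many of the papers below modify one of these stages (the scene representation, the sampling, or the supervision signal).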
Papers
Conditional-Flow NeRF: Accurate 3D Modelling with Reliable Uncertainty Quantification
Jianxiong Shen, Antonio Agudo, Francesc Moreno-Noguer, Adria Ruiz
ViewFormer: NeRF-free Neural Rendering from Few Images Using Transformers
Jonáš Kulhánek, Erik Derner, Torsten Sattler, Robert Babuška
Enhancement of Novel View Synthesis Using Omnidirectional Image Completion
Takayuki Hara, Tatsuya Harada
NeRF-Supervision: Learning Dense Object Descriptors from Neural Radiance Fields
Lin Yen-Chen, Pete Florence, Jonathan T. Barron, Tsung-Yi Lin, Alberto Rodriguez, Phillip Isola
NeuroFluid: Fluid Dynamics Grounding with Particle-Driven Neural Radiance Fields
Shanyan Guan, Huayu Deng, Yunbo Wang, Xiaokang Yang
Block-NeRF: Scalable Large Scene Neural View Synthesis
Matthew Tancik, Vincent Casser, Xinchen Yan, Sabeek Pradhan, Ben Mildenhall, Pratul P. Srinivasan, Jonathan T. Barron, Henrik Kretzschmar
PVSeRF: Joint Pixel-, Voxel- and Surface-Aligned Radiance Field for Single-Image Novel View Synthesis
Xianggang Yu, Jiapeng Tang, Yipeng Qin, Chenghong Li, Linchao Bao, Xiaoguang Han, Shuguang Cui