Neural Radiance Field
Neural Radiance Fields (NeRFs) are a technique for building realistic 3D scene representations from 2D images, reconstructing both geometry and appearance so that novel views can be rendered. Current research focuses on improving efficiency and robustness, exploring alternative representations such as Gaussian splatting for faster rendering and adapting NeRFs to diverse sensing modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). By enabling high-fidelity 3D scene modeling and novel view synthesis from limited input data, this technology has significant implications for autonomous driving, robotics, medical imaging, and virtual/augmented reality.
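At its core, a NeRF is a multilayer perceptron that maps a 3D position (usually passed through a frequency-based positional encoding) to a volume density and an emitted color, and a pixel is rendered by alpha-compositing those predictions along the camera ray. The sketch below is a minimal, hypothetical PyTorch illustration of that idea, not the implementation of any paper listed here; the network width, encoding depth, near/far bounds, and sample count are illustrative choices, and the view-direction conditioning of the original NeRF is omitted for brevity.

```python
# Minimal NeRF sketch (illustrative only): an MLP maps encoded 3D points to
# density + color, and rays are rendered by alpha compositing the samples.
import torch
import torch.nn as nn


def positional_encoding(x, num_freqs=6):
    """Map coordinates to [x, sin(2^k x), cos(2^k x)] Fourier features."""
    feats = [x]
    for k in range(num_freqs):
        feats.append(torch.sin((2.0 ** k) * x))
        feats.append(torch.cos((2.0 ** k) * x))
    return torch.cat(feats, dim=-1)


class TinyNeRF(nn.Module):
    def __init__(self, num_freqs=6, hidden=128):
        super().__init__()
        self.num_freqs = num_freqs
        in_dim = 3 * (1 + 2 * num_freqs)  # encoded (x, y, z)
        self.mlp = nn.Sequential(
            nn.Linear(in_dim, hidden), nn.ReLU(),
            nn.Linear(hidden, hidden), nn.ReLU(),
            nn.Linear(hidden, 4),  # RGB + density
        )

    def forward(self, pts):
        out = self.mlp(positional_encoding(pts, self.num_freqs))
        rgb = torch.sigmoid(out[..., :3])   # colors in [0, 1]
        sigma = torch.relu(out[..., 3])     # non-negative volume density
        return rgb, sigma


def render_rays(model, rays_o, rays_d, near=2.0, far=6.0, n_samples=64):
    """Volume-render a batch of rays: sample points, query the MLP, composite."""
    t = torch.linspace(near, far, n_samples)                     # sample depths
    pts = rays_o[:, None, :] + rays_d[:, None, :] * t[None, :, None]
    rgb, sigma = model(pts)                                      # (B,S,3), (B,S)
    delta = torch.cat([t[1:] - t[:-1], t[-1:] - t[-2:-1]])       # sample spacing
    alpha = 1.0 - torch.exp(-sigma * delta)                      # per-sample opacity
    # Exclusive cumulative product = transmittance reaching each sample.
    trans = torch.cat(
        [torch.ones_like(alpha[:, :1]),
         torch.cumprod(1.0 - alpha + 1e-10, dim=-1)[:, :-1]],
        dim=-1,
    )
    weights = alpha * trans                                      # compositing weights
    return (weights[..., None] * rgb).sum(dim=-2)                # (B, 3) pixel colors
```

Training then amounts to minimizing the squared error between `render_rays` outputs and the observed pixel colors of the input images, with ray origins and directions derived from the known camera poses.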
Papers
LSE-NeRF: Learning Sensor Modeling Errors for Deblured Neural Radiance Fields with RGB-Event Stereo
Wei Zhi Tang, Daniel Rebain, Konstantinos G. Derpanis, Kwang Moo Yi
G-NeLF: Memory- and Data-Efficient Hybrid Neural Light Field for Novel View Synthesis
Lutao Jiang, Lin Wang
Neural Surface Reconstruction and Rendering for LiDAR-Visual Systems
Jianheng Liu, Chunran Zheng, Yunfei Wan, Bowen Wang, Yixi Cai, Fu Zhang
$R^2$-Mesh: Reinforcement Learning Powered Mesh Reconstruction via Geometry and Appearance Refinement
Haoyang Wang, Liming Liu, Quanlu Jia, Jiangkai Wu, Haodan Zhang, Peiheng Wang, Xinggong Zhang
DiscoNeRF: Class-Agnostic Object Field for 3D Object Discovery
Corentin Dumery, Aoxiang Fan, Ren Li, Nicolas Talabot, Pascal Fua