Neural Radiance Fields
Neural Radiance Fields (NeRFs) are a powerful technique for reconstructing both the geometry and appearance of a 3D scene from 2D images. Current research focuses on improving efficiency and robustness, exploring related representations such as Gaussian splatting for faster rendering and adapting NeRFs to diverse data modalities (LiDAR, infrared, ultrasound) and challenging conditions (low light, sparse views). By enabling high-fidelity 3D scene modeling and novel view synthesis from limited input data, this technology has significant implications for autonomous driving, robotics, medical imaging, and virtual/augmented reality.
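Since the papers below build on the core NeRF machinery, a minimal sketch of its volume-rendering step may help: a radiance field maps 3D points to density and color, and a pixel is rendered by alpha-compositing samples along its camera ray. The names here (`toy_field`, `render_ray`) are hypothetical illustrations, not code from any listed paper; a real NeRF replaces `toy_field` with a trained MLP queried at positionally encoded points.

```python
import numpy as np

def toy_field(points):
    """Hypothetical stand-in for NeRF's learned MLP: density and RGB per 3D point."""
    dist = np.linalg.norm(points, axis=-1)
    sigma = np.where(dist < 0.5, 20.0, 0.0)           # dense sphere of radius 0.5
    rgb = np.tile([0.8, 0.3, 0.3], (len(points), 1))  # constant reddish color
    return sigma, rgb

def render_ray(origin, direction, field, near=0.0, far=2.0, n_samples=64):
    """Composite samples along one ray: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    t = np.linspace(near, far, n_samples)
    points = origin + t[:, None] * direction              # sample positions on the ray
    sigma, rgb = field(points)                            # query the radiance field
    delta = np.append(np.diff(t), 1e10)                   # spacing between samples
    alpha = 1.0 - np.exp(-sigma * delta)                  # per-sample opacity
    trans = np.cumprod(np.append(1.0, 1.0 - alpha[:-1]))  # accumulated transmittance T_i
    weights = trans * alpha
    return (weights[:, None] * rgb).sum(axis=0)           # composited pixel color

# Camera at z = -1.5 looking down the +z axis through the sphere.
color = render_ray(np.array([0.0, 0.0, -1.5]), np.array([0.0, 0.0, 1.0]), toy_field)
print(color)  # approaches the sphere's color where the ray crosses dense volume
```

Because the composited color is differentiable with respect to the field's outputs, the same rendering loop drives training: gradients of a photometric loss against the input images flow back into the MLP. Much of the work below targets the cost of this per-ray sampling, which is what makes alternatives like Gaussian splatting attractive.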
Papers
Fast High Dynamic Range Radiance Fields for Dynamic Scenes
Guanjun Wu, Taoran Yi, Jiemin Fang, Wenyu Liu, Xinggang Wang
TriNeRFLet: A Wavelet Based Multiscale Triplane NeRF Representation
Rajaei Khatib, Raja Giryes
GO-NeRF: Generating Objects in Neural Radiance Fields for Virtual Reality Content Creation
Peng Dai, Feitong Tan, Xin Yu, Yifan Peng, Yinda Zhang, Xiaojuan Qi
Diffusion Priors for Dynamic View Synthesis from Monocular Videos
Chaoyang Wang, Peiye Zhuang, Aliaksandr Siarohin, Junli Cao, Guocheng Qian, Hsin-Ying Lee, Sergey Tulyakov
FPRF: Feed-Forward Photorealistic Style Transfer of Large-Scale 3D Neural Radiance Fields
GeonU Kim, Kim Youwang, Tae-Hyun Oh
CTNeRF: Cross-Time Transformer for Dynamic Neural Radiance Field from Monocular Video
Xingyu Miao, Yang Bai, Haoran Duan, Yawen Huang, Fan Wan, Yang Long, Yefeng Zheng