Novel View Synthesis
Novel view synthesis (NVS) aims to generate realistic images from viewpoints that were never captured, reconstructing a 3D scene from 2D observations. Current research centers on learned scene representations, both implicit neural radiance fields (NeRFs) and explicit 3D Gaussian splatting, and focuses on improving efficiency, handling sparse or noisy input (including single-view scenarios), and increasing the realism of synthesized views, particularly for complex scenes with dynamic elements or challenging lighting. These advances enable more accurate 3D modeling and more immersive experiences in fields such as robotics, cultural heritage preservation, and virtual/augmented reality.
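At the core of NeRF-style methods, a pixel is rendered by compositing color and opacity along a camera ray. The sketch below shows the standard volume-rendering quadrature in NumPy; the function name render_ray and the toy inputs are illustrative assumptions, not code from any of the listed papers.

import numpy as np

def render_ray(sigmas, colors, deltas):
    # sigmas: (N,) volume densities at N samples along the ray
    # colors: (N, 3) RGB colors predicted at those samples
    # deltas: (N,) distances between adjacent samples
    # Returns the accumulated RGB color via the NeRF quadrature
    #   C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    # with transmittance T_i = exp(-sum_{j<i} sigma_j * delta_j).
    alphas = 1.0 - np.exp(-sigmas * deltas)  # opacity of each ray segment
    # T_i: probability the ray reaches sample i without being absorbed
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas))[:-1])
    weights = trans * alphas  # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: 64 random samples along one ray, uniform spacing of 0.05.
rng = np.random.default_rng(0)
n = 64
rgb = render_ray(rng.uniform(0, 5, n), rng.uniform(0, 1, (n, 3)), np.full(n, 0.05))

Gaussian splatting replaces the per-ray sampling with rasterized, depth-sorted Gaussians, but the same front-to-back alpha compositing of the weights applies.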
Papers
MomentsNeRF: Leveraging Orthogonal Moments for Few-Shot Neural Rendering
Ahmad AlMughrabi, Ricardo Marques, Petia Radeva
AutoSplat: Constrained Gaussian Splatting for Autonomous Driving Scene Reconstruction
Mustafa Khan, Hamidreza Fazlali, Dhruv Sharma, Tongtong Cao, Dongfeng Bai, Yuan Ren, Bingbing Liu
Reducing the Memory Footprint of 3D Gaussian Splatting
Panagiotis Papantonakis, Georgios Kopanas, Bernhard Kerbl, Alexandre Lanvin, George Drettakis
Crowd-Sourced NeRF: Collecting Data from Production Vehicles for 3D Street View Reconstruction
Tong Qin, Changze Li, Haoyang Ye, Shaowei Wan, Minzhen Li, Hongwei Liu, Ming Yang