Novel View Synthesis
Novel view synthesis (NVS) aims to generate realistic images from viewpoints that were never directly captured, which requires reconstructing a 3D scene from 2D observations. Current research centers on learned scene representations, chiefly implicit neural radiance fields (NeRFs) and explicit 3D Gaussian splatting, with a focus on improving efficiency, handling sparse or noisy input (including single-view scenarios), and enhancing the realism of synthesized views, particularly for scenes with dynamic elements or challenging lighting. These advances enable more accurate 3D modeling and more immersive experiences, with applications in robotics, cultural heritage preservation, and virtual and augmented reality.
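Several of the papers below build on NeRF-style volumetric rendering, so a minimal sketch of its core compositing step may help orient readers: a ray is sampled at discrete points, and per-sample densities and colors are alpha-composited into a pixel color. This is a generic illustration of that standard step, not the method of any paper listed here; the function name composite_along_ray and the toy inputs are hypothetical.

```python
import numpy as np

def composite_along_ray(sigmas, colors, deltas):
    """Discrete volume rendering as used by NeRF-style methods.

    sigmas: (N,) per-sample volume densities along one camera ray
    colors: (N, 3) per-sample RGB values
    deltas: (N,) distances between consecutive samples
    Returns the composited RGB color for the ray.
    """
    alphas = 1.0 - np.exp(-sigmas * deltas)      # opacity of each ray segment
    trans = np.cumprod(1.0 - alphas + 1e-10)     # transmittance up to each sample
    trans = np.concatenate([[1.0], trans[:-1]])  # shift so the first sample sees T = 1
    weights = trans * alphas                     # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: 64 samples along a ray through a uniform-density orange "fog".
n = 64
rgb = composite_along_ray(
    sigmas=np.full(n, 0.5),
    colors=np.tile([1.0, 0.5, 0.2], (n, 1)),
    deltas=np.full(n, 4.0 / n),
)
print(rgb)
```

Methods differ mainly in how the per-sample densities and colors are produced (an MLP for NeRFs, rasterized Gaussians for splatting); the compositing above is the common denominator.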
Papers
CombiNeRF: A Combination of Regularization Techniques for Few-Shot Neural Radiance Field View Synthesis
Matteo Bonotto, Luigi Sarrocco, Daniele Evangelista, Marco Imperoli, Alberto Pretto
Leveraging Thermal Modality to Enhance Reconstruction in Low-Light Conditions
Jiacong Xu, Mingqian Liao, K Ram Prabhakar, Vishal M. Patel
Ctrl123: Consistent Novel View Synthesis via Closed-Loop Transcription
Hongxiang Zhao, Xili Dai, Jianan Wang, Shengbang Tong, Jingyuan Zhang, Weida Wang, Lei Zhang, Yi Ma
MSI-NeRF: Linking Omni-Depth with View Synthesis through Multi-Sphere Image aided Generalizable Neural Radiance Field
Dongyu Yan, Guanyu Huang, Fengyu Quan, Haoyao Chen