Novel View Synthesis
Novel view synthesis (NVS) aims to generate realistic images from viewpoints not directly captured, reconstructing 3D scenes from 2D data. Current research centers on learned scene representations, chiefly implicit neural radiance fields (NeRFs) and explicit 3D Gaussian splatting, with a focus on improving efficiency, handling sparse or noisy input (including single-view scenarios), and enhancing the realism of synthesized views, particularly for scenes with dynamic elements or challenging lighting. These advances benefit robotics, cultural heritage preservation, and virtual/augmented reality by enabling more accurate 3D modeling and more immersive experiences.
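The rendering step shared by most NeRF-style methods in this list is a simple quadrature of the volume rendering integral: each sample along a camera ray contributes its colour weighted by its own opacity and by the transmittance accumulated in front of it. Below is a minimal NumPy sketch of that standard formulation (as in Mildenhall et al., 2020); the function and variable names are illustrative and not taken from any paper listed here.

```python
import numpy as np

def volume_render(sigmas, rgbs, deltas):
    """Composite one ray's colour from N samples.

    sigmas: (N,)   density at each sample along the ray
    rgbs:   (N, 3) RGB colour at each sample
    deltas: (N,)   distance between adjacent samples
    """
    # Opacity of each segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i: probability the ray reaches sample i unoccluded
    # (T_1 = 1, then a cumulative product of the previous 1 - alpha terms).
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas + 1e-10)[:-1]])
    # Per-sample blending weight, then alpha-composite the colours.
    weights = trans * alphas
    return (weights[:, None] * rgbs).sum(axis=0)

# Toy usage: 64 samples along one ray with random density and colour.
rng = np.random.default_rng(0)
sigmas = rng.uniform(0.0, 2.0, 64)
rgbs = rng.uniform(0.0, 1.0, (64, 3))
deltas = np.full(64, 0.05)
print(volume_render(sigmas, rgbs, deltas))  # composited RGB for this ray
```

3D Gaussian splatting methods such as LP-3DGS use the same front-to-back alpha compositing, but the per-sample opacities come from rasterized 2D projections of 3D Gaussians rather than from densities queried along a ray.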
Papers
Flash3D: Feed-Forward Generalisable 3D Scene Reconstruction from a Single Image
Stanislaw Szymanowicz, Eldar Insafutdinov, Chuanxia Zheng, Dylan Campbell, João F. Henriques, Christian Rupprecht, Andrea Vedaldi
Gear-NeRF: Free-Viewpoint Rendering and Tracking with Motion-aware Spatio-Temporal Sampling
Xinhang Liu, Yu-Wing Tai, Chi-Keung Tang, Pedro Miraldo, Suhas Lohit, Moitreya Chatterjee
IReNe: Instant Recoloring of Neural Radiance Fields
Alessio Mazzucchelli, Adrian Garcia-Garcia, Elena Garces, Fernando Rivas-Manzaneque, Francesc Moreno-Noguer, Adrian Penate-Sanchez
Uncertainty-guided Optimal Transport in Depth Supervised Sparse-View 3D Gaussian
Wei Sun, Qi Zhang, Yanzhao Zhou, Qixiang Ye, Jianbin Jiao, Yuan Li
Neural Radiance Fields for Novel View Synthesis in Monocular Gastroscopy
Zijie Jiang, Yusuke Monno, Masatoshi Okutomi, Sho Suzuki, Kenji Miki
LP-3DGS: Learning to Prune 3D Gaussian Splatting
Zhaoliang Zhang, Tianchen Song, Yongjae Lee, Li Yang, Cheng Peng, Rama Chellappa, Deliang Fan
Zero-to-Hero: Enhancing Zero-Shot Novel View Synthesis via Attention Map Filtering
Ido Sobol, Chenfeng Xu, Or Litany