Novel View Synthesis
Novel view synthesis (NVS) aims to generate realistic images from viewpoints not directly captured, reconstructing 3D scenes from 2D observations. Current research centers on neural scene representations, from implicit neural radiance fields (NeRFs) to explicit 3D Gaussian splatting, focusing on improving efficiency, handling sparse or noisy input (including single-view settings), and enhancing the realism of synthesized views, particularly for complex scenes with dynamic elements or challenging lighting. These advances enable more accurate 3D modeling and more immersive experiences, with applications in robotics, cultural heritage preservation, and virtual and augmented reality.
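To make the NeRF-style pipeline mentioned above concrete, here is a minimal sketch of the volume-rendering quadrature that such methods use to turn per-sample densities and colors into a pixel color. This is a generic illustration in NumPy, not the implementation of any paper listed below; the function name `composite_ray` and the toy inputs are assumptions for the example.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray using the standard NeRF
    quadrature: C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i,
    where T_i = exp(-sum_{j<i} sigma_j * delta_j) is the transmittance
    accumulated before sample i.

    densities: (N,)   non-negative volume densities sigma_i
    colors:    (N, 3) RGB colors c_i predicted at each sample
    deltas:    (N,)   distances between consecutive samples
    returns:   (3,)   composited RGB color for the ray
    """
    # Opacity contributed by each sample over its interval.
    alphas = 1.0 - np.exp(-densities * deltas)
    # Probability that the ray reaches each sample unoccluded.
    transmittance = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = transmittance * alphas  # per-sample contribution to the pixel
    return (weights[:, None] * colors).sum(axis=0)

# Toy usage: 64 samples along one ray with random densities/colors
# standing in for the outputs of a (hypothetical) scene MLP.
rng = np.random.default_rng(0)
n = 64
sigma = rng.uniform(0.0, 2.0, n)
rgb = rng.uniform(0.0, 1.0, (n, 3))
delta = np.full(n, 4.0 / n)  # uniform spacing over a ray of length 4
print(composite_ray(sigma, rgb, delta))
```

Much of the efficiency work surveyed here (e.g., adaptive sampling as in AdaNeRF) amounts to choosing fewer, better-placed samples before this compositing step.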
Papers
Neural Pixel Composition: 3D-4D View Synthesis from Multi-Views
Aayush Bansal, Michael Zollhöfer
Generalizable Patch-Based Neural Rendering
Mohammed Suhail, Carlos Esteves, Leonid Sigal, Ameesh Makadia
AdaNeRF: Adaptive Sampling for Real-time Rendering of Neural Radiance Fields
Andreas Kurz, Thomas Neff, Zhaoyang Lv, Michael Zollhöfer, Markus Steinberger
EyeNeRF: A Hybrid Representation for Photorealistic Synthesis, Animation and Relighting of Human Eyes
Gengyan Li, Abhimitra Meka, Franziska Müller, Marcel C. Bühler, Otmar Hilliges, Thabo Beeler
Virtual Correspondence: Humans as a Cue for Extreme-View Geometry
Wei-Chiu Ma, Anqi Joyce Yang, Shenlong Wang, Raquel Urtasun, Antonio Torralba
FWD: Real-time Novel View Synthesis with Forward Warping and Depth
Ang Cao, Chris Rockwell, Justin Johnson