View Synthesis
View synthesis aims to generate realistic images of a scene from novel viewpoints not present in the input data. Current research focuses heavily on improving the speed and quality of view synthesis with methods such as 3D Gaussian splatting and neural radiance fields (NeRFs), often incorporating multi-view stereo and diffusion models to improve accuracy and to handle sparse or inconsistent inputs. These advances matter for applications such as augmented and virtual reality, robotics, and 3D modeling, where they enable more realistic and efficient rendering of complex scenes. The field is also actively exploring generalization to unseen scenes and objects, particularly in challenging settings such as low-light conditions or sparse input views.
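Both families of methods mentioned above ultimately rely on the same front-to-back alpha compositing to form a pixel: NeRFs composite density samples along a ray, while Gaussian splatting composites depth-sorted Gaussians with per-primitive opacities. As a concrete illustration, here is a minimal NumPy sketch of the classic NeRF-style volume-rendering step; the function name, array shapes, and random test data are illustrative only and not drawn from any of the papers listed below.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Front-to-back volume rendering along a single ray.

    sigmas: (N,) densities at N samples along the ray
    colors: (N, 3) RGB colors at those samples
    deltas: (N,) distances between consecutive samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance before each sample: T_i = prod_{j<i} (1 - alpha_j)
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    # Each sample contributes T_i * alpha_i of its color to the pixel
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

if __name__ == "__main__":
    # Hypothetical test data: 64 samples along one ray
    rng = np.random.default_rng(0)
    n = 64
    sigmas = rng.random(n) * 5.0          # nonnegative densities
    colors = rng.random((n, 3))           # RGB in [0, 1]
    deltas = np.full(n, 1.0 / n)          # uniform sample spacing
    print(composite_ray(sigmas, colors, deltas))
```

In Gaussian splatting the per-sample term `1 - exp(-sigma * delta)` is replaced by each Gaussian's projected opacity, but the cumulative-transmittance blend is structurally the same.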
Papers
Generalizable Human Gaussians for Sparse View Synthesis
Youngjoong Kwon, Baole Fang, Yixing Lu, Haoye Dong, Cheng Zhang, Francisco Vicente Carrasco, Albert Mosella-Montoro, Jianjin Xu, Shingo Takagi, Daeil Kim, Aayush Prakash, Fernando De la Torre
Splatfacto-W: A Nerfstudio Implementation of Gaussian Splatting for Unconstrained Photo Collections
Congrong Xu, Justin Kerr, Angjoo Kanazawa
Lite2Relight: 3D-aware Single Image Portrait Relighting
Pramod Rao, Gereon Fox, Abhimitra Meka, Mallikarjun B R, Fangneng Zhan, Tim Weyrich, Bernd Bickel, Hanspeter Pfister, Wojciech Matusik, Mohamed Elgharib, Christian Theobalt
NGP-RT: Fusing Multi-Level Hash Features with Lightweight Attention for Real-Time Novel View Synthesis
Yubin Hu, Xiaoyang Guo, Yang Xiao, Jingwei Huang, Yong-Jin Liu