View Synthesis
View synthesis aims to generate realistic images of a scene from novel viewpoints not present in the input data. Current research focuses heavily on improving the speed and quality of view synthesis with methods such as 3D Gaussian splatting and neural radiance fields, often incorporating multi-view stereo and diffusion models to improve accuracy and handle sparse or inconsistent input data. These advances matter for applications such as augmented and virtual reality, robotics, and 3D modeling, enabling more realistic and efficient rendering of complex scenes. The field is also actively exploring how to generalize to unseen scenes and objects, particularly in challenging scenarios such as low-light conditions or sparse input views.
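Both neural radiance fields and Gaussian splatting ultimately render a novel view by alpha-compositing color samples along each camera ray. The sketch below is a minimal illustration of that compositing step, not an implementation from any of the papers listed here; the densities, colors, and sample spacings are placeholder values chosen only to make the example runnable.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Alpha-composite samples along one ray (NeRF-style volume rendering).

    densities: (N,) non-negative volume densities sigma_i at each sample
    colors:    (N, 3) RGB color at each sample
    deltas:    (N,) spacing between consecutive samples along the ray
    Returns the rendered RGB color for the ray.
    """
    # Opacity of each sample: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Toy example with three samples along a single ray (illustrative values only).
densities = np.array([0.1, 1.5, 3.0])
colors = np.array([[0.9, 0.2, 0.2], [0.2, 0.8, 0.3], [0.1, 0.2, 0.9]])
deltas = np.full(3, 0.05)
print(composite_ray(densities, colors, deltas))
```

The same weighting scheme applies whether the samples come from a neural field queried along the ray or from projected Gaussian primitives sorted by depth; the methods below differ mainly in how those samples are produced and how fast they can be evaluated.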
Papers
LVSM: A Large View Synthesis Model with Minimal 3D Inductive Bias
Haian Jin, Hanwen Jiang, Hao Tan, Kai Zhang, Sai Bi, Tianyuan Zhang, Fujun Luan, Noah Snavely, Zexiang Xu
VistaDream: Sampling multiview consistent images for single-view scene reconstruction
Haiping Wang, Yuan Liu, Ziwei Liu, Wenping Wang, Zhen Dong, Bisheng Yang
EVER: Exact Volumetric Ellipsoid Rendering for Real-time View Synthesis
Alexander Mai, Peter Hedman, George Kopanas, Dor Verbin, David Futschik, Qiangeng Xu, Falko Kuester, Jonathan T. Barron, Yinda Zhang
EVA-Gaussian: 3D Gaussian-based Real-time Human Novel View Synthesis under Diverse Camera Settings
Yingdong Hu, Zhening Liu, Jiawei Shao, Zehong Lin, Jun Zhang