Novel View Synthesis
Novel view synthesis (NVS) aims to generate realistic images from viewpoints that were not directly captured, reconstructing 3D scenes from 2D data. Current research centers on learned scene representations, including implicit neural radiance fields (NeRFs) and explicit 3D Gaussian splatting, with a focus on improving efficiency, handling sparse or noisy input (including single-view scenarios), and enhancing the realism of synthesized views, particularly for complex scenes with dynamic elements or challenging lighting. These advances have significant implications for robotics, cultural heritage preservation, and virtual/augmented reality, enabling more accurate 3D modeling and more immersive experiences.
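Both NeRF-style and Gaussian-splatting pipelines ultimately render a pixel by alpha-compositing color samples along a viewing ray, weighted by accumulated transmittance. The following is a minimal NumPy sketch of that volume-rendering step, not any specific paper's implementation; the sample densities, colors, and step sizes below are toy values chosen for illustration.

```python
import numpy as np

def composite_ray(densities, colors, deltas):
    """Volume-render one ray: convert per-sample densities to opacities,
    accumulate transmittance, and alpha-composite the sample colors."""
    alphas = 1.0 - np.exp(-densities * deltas)  # opacity of each sample
    # transmittance: probability the ray reaches each sample unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    weights = alphas * trans                    # contribution of each sample
    return weights @ colors                     # final RGB for this pixel

# toy example: four samples along one ray (hypothetical values)
densities = np.array([0.0, 0.5, 2.0, 0.1])                       # volume density
colors = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1], [1, 1, 1]],  # RGB per sample
                  dtype=float)
deltas = np.full(4, 0.25)                                        # step sizes
rgb = composite_ray(densities, colors, deltas)
```

Gaussian splatting replaces the dense ray samples with depth-sorted projected Gaussians, but the same front-to-back compositing rule applies.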
Papers
MapGS: Generalizable Pretraining and Data Augmentation for Online Mapping via Novel View Synthesis
Hengyuan Zhang, David Paz, Yuliang Guo, Xinyu Huang, Henrik I. Christensen, Liu Ren
Aug3D: Augmenting large scale outdoor datasets for Generalizable Novel View Synthesis
Aditya Rauniyar, Omar Alama, Silong Yong, Katia Sycara, Sebastian Scherer
EnvGS: Modeling View-Dependent Appearance with Environment Gaussian
Tao Xie, Xi Chen, Zhen Xu, Yiman Xie, Yudong Jin, Yujun Shen, Sida Peng, Hujun Bao, Xiaowei Zhou
Drive-1-to-3: Enriching Diffusion Priors for Novel View Synthesis of Real Vehicles
Chuang Lin, Bingbing Zhuang, Shanlin Sun, Ziyu Jiang, Jianfei Cai, Manmohan Chandraker
LiftRefine: Progressively Refined View Synthesis from 3D Lifting with Volume-Triplane Representations
Tung Do, Thuan Hoang Nguyen, Anh Tuan Tran, Rang Nguyen, Binh-Son Hua
Real-Time Position-Aware View Synthesis from Single-View Input
Manu Gond, Emin Zerman, Sebastian Knorr, Mårten Sjöström
Turbo-GS: Accelerating 3D Gaussian Fitting for High-Quality Radiance Fields
Tao Lu, Ankit Dhiman, R Srinath, Emre Arslan, Angela Xing, Yuanbo Xiangli, R Venkatesh Babu, Srinath Sridhar
Probabilistic Inverse Cameras: Image to 3D via Multiview Geometry
Rishabh Kabra, Drew A. Hudson, Sjoerd van Steenkiste, Joao Carreira, Niloy J. Mitra
TSGaussian: Semantic and Depth-Guided Target-Specific Gaussian Splatting from Sparse Views
Liang Zhao, Zehan Bao, Yi Xie, Hong Chen, Yaohui Chen, Weifu Li
From an Image to a Scene: Learning to Imagine the World from a Million 360 Videos
Matthew Wallingford, Anand Bhattad, Aditya Kusupati, Vivek Ramanujan, Matt Deitke, Sham Kakade, Aniruddha Kembhavi, Roozbeh Mottaghi, Wei-Chiu Ma, Ali Farhadi
Faster and Better 3D Splatting via Group Training
Chengbo Wang, Guozheng Ma, Yifei Xue, Yizhen Lao