View Synthesis
View synthesis aims to generate realistic images of a scene from novel viewpoints not present in the input data. Current research focuses heavily on improving the speed and quality of view synthesis with representations such as 3D Gaussian splatting and neural radiance fields, often incorporating multi-view stereo and diffusion models to enhance accuracy and to handle sparse or inconsistent inputs. These advances matter for applications such as augmented and virtual reality, robotics, and 3D modeling, enabling more realistic and efficient rendering of complex scenes. The field is also actively exploring better generalization to unseen scenes and objects, particularly in challenging scenarios such as low-light conditions or sparse input views.
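To make the radiance-field side of this concrete, below is a minimal sketch of the volume-rendering quadrature that NeRF-style view synthesis uses to turn per-sample densities and colors along a camera ray into one pixel of a novel view. It is a generic illustration, not the method of any paper listed here; the function name render_ray and the placeholder density/color values are assumptions for the example.

```python
import numpy as np

def render_ray(densities, colors, deltas):
    """Composite per-sample densities and colors along one ray into a pixel color.

    Standard NeRF-style quadrature: alpha_i = 1 - exp(-sigma_i * delta_i),
    accumulated front to back with transmittance T_i = prod_{j<i} (1 - alpha_j).
    """
    alphas = 1.0 - np.exp(-densities * deltas)                       # per-sample opacity
    trans = np.cumprod(np.concatenate(([1.0], 1.0 - alphas)))[:-1]   # transmittance before each sample
    weights = trans * alphas                                         # contribution of each sample
    return (weights[:, None] * colors).sum(axis=0)                   # weighted RGB for the pixel

# Toy example: 64 samples along one camera ray (values are placeholders, not a trained model).
n = 64
densities = np.random.rand(n) * 2.0   # sigma a radiance field would predict at each sample
colors = np.random.rand(n, 3)         # RGB a radiance field would predict at each sample
deltas = np.full(n, 4.0 / n)          # spacing between consecutive samples along the ray
pixel = render_ray(densities, colors, deltas)
print(pixel)                          # rendered color for this novel-view ray
```

In practice, densities and colors come from a learned scene representation (an MLP for NeRF, or anisotropic Gaussians rasterized per pixel for 3D Gaussian splatting), and much of the work surveyed here targets how that representation is trained and queried efficiently under sparse or difficult inputs.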
Papers
Multi-Level Neural Scene Graphs for Dynamic Urban Environments
Tobias Fischer, Lorenzo Porzi, Samuel Rota Bulò, Marc Pollefeys, Peter Kontschieder
Stable Surface Regularization for Fast Few-Shot NeRF
Byeongin Joung, Byeong-Uk Lee, Jaesung Choe, Ukcheol Shin, Minjun Kang, Taeyeop Lee, In So Kweon, Kuk-Jin Yoon
XScale-NVS: Cross-Scale Novel View Synthesis with Hash Featurized Manifold
Guangyu Wang, Jinzhi Zhang, Fan Wang, Ruqi Huang, Lu Fang
CoherentGS: Sparse Novel View Synthesis with Coherent 3D Gaussians
Avinash Paliwal, Wei Ye, Jinhui Xiong, Dmytro Kotovenko, Rakesh Ranjan, Vikas Chandra, Nima Khademi Kalantari