Novel View Synthesis
Novel view synthesis (NVS) aims to generate realistic images from viewpoints that were never directly captured, by reconstructing a 3D scene from 2D observations. Current research centers on learned scene representations, chiefly implicit neural radiance fields (NeRFs) and, more recently, explicit 3D Gaussian splatting, with a focus on improving efficiency, handling sparse or noisy input (including single-view scenarios), and enhancing the realism of synthesized views, particularly for complex scenes with dynamic elements or challenging lighting. These advances have significant implications for robotics, cultural heritage preservation, and virtual/augmented reality, enabling more accurate 3D modeling and more immersive experiences.
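As background for the papers below (this is the standard NeRF volume-rendering formulation from Mildenhall et al., 2020, not a contribution of any listed work), a radiance field renders the color of a pixel by integrating a learned density $\sigma$ and view-dependent radiance $\mathbf{c}$ along the camera ray $\mathbf{r}(t) = \mathbf{o} + t\mathbf{d}$:

$$C(\mathbf{r}) = \int_{t_n}^{t_f} T(t)\,\sigma(\mathbf{r}(t))\,\mathbf{c}(\mathbf{r}(t), \mathbf{d})\,dt, \qquad T(t) = \exp\!\left(-\int_{t_n}^{t} \sigma(\mathbf{r}(s))\,ds\right),$$

where $T(t)$ is the accumulated transmittance along the ray. The listed papers build on variants of this formulation, differing mainly in how the scene representation is parameterized, supervised, or accelerated.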
Papers
TensoIR: Tensorial Inverse Rendering
Haian Jin, Isabella Liu, Peijia Xu, Xiaoshuai Zhang, Songfang Han, Sai Bi, Xiaowei Zhou, Zexiang Xu, Hao Su
Total-Recon: Deformable Scene Reconstruction for Embodied View Synthesis
Chonghyuk Song, Gengshan Yang, Kangle Deng, Jun-Yan Zhu, Deva Ramanan
Explicit Correspondence Matching for Generalizable Neural Radiance Fields
Yuedong Chen, Haofei Xu, Qianyi Wu, Chuanxia Zheng, Tat-Jen Cham, Jianfei Cai
Gen-NeRF: Efficient and Generalizable Neural Radiance Fields via Algorithm-Hardware Co-Design
Yonggan Fu, Zhifan Ye, Jiayi Yuan, Shunyao Zhang, Sixu Li, Haoran You, Yingyan Lin
Nerfbusters: Removing Ghostly Artifacts from Casually Captured NeRFs
Frederik Warburg, Ethan Weber, Matthew Tancik, Aleksander Holynski, Angjoo Kanazawa
ReLight My NeRF: A Dataset for Novel View Synthesis and Relighting of Real World Objects
Marco Toschi, Riccardo De Matteo, Riccardo Spezialetti, Daniele De Gregorio, Luigi Di Stefano, Samuele Salti