View Synthesis
View synthesis aims to generate realistic images of a scene from novel viewpoints that are not present in the input data. Current research focuses heavily on improving the speed and quality of view synthesis with methods such as neural radiance fields (NeRF) and 3D Gaussian splatting, often incorporating multi-view stereo and diffusion models to enhance accuracy and handle sparse or inconsistent inputs. These advances are significant for applications such as augmented and virtual reality, robotics, and 3D modeling, where complex scenes must be rendered realistically and efficiently. The field is also actively exploring how to generalize to unseen scenes and objects, particularly in challenging settings such as low-light conditions or sparse input views.
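The common core of NeRF-style methods (and, in discretized form, of alpha-blended Gaussian splatting) is the volume-rendering quadrature: per-sample densities and colors along a camera ray are composited into a single pixel color. The sketch below is a minimal NumPy illustration of that quadrature; composite_ray and its argument names are illustrative choices for this example, not taken from any of the papers listed here.

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """Volume-rendering quadrature used by NeRF-style view synthesis.

    sigmas: (N,) volume densities at N samples along one camera ray
    colors: (N, 3) RGB values predicted at those samples
    deltas: (N,) distances between consecutive samples
    """
    # Opacity of each ray segment: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance T_i = prod_{j < i} (1 - alpha_j): light surviving to sample i
    trans = np.concatenate([[1.0], np.cumprod(1.0 - alphas)[:-1]])
    # Per-sample weights; they sum to at most 1 along the ray
    weights = trans * alphas
    # Expected pixel color is the weight-averaged sample color
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Illustrative call with random samples along a single ray
rng = np.random.default_rng(0)
rgb, w = composite_ray(rng.uniform(0, 5, 64),
                       rng.uniform(0, 1, (64, 3)),
                       np.full(64, 0.05))
```

The returned weights double as a termination distribution along the ray, which is why many of the efficiency-oriented methods below (e.g., learned sample placement) focus on concentrating samples where those weights are large.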
Papers
Control-NeRF: Editable Feature Volumes for Scene Rendering and Manipulation
Verica Lazova, Vladimir Guzov, Kyle Olszewski, Sergey Tulyakov, Gerard Pons-Moll
Learning Dynamic View Synthesis With Few RGBD Cameras
Shengze Wang, YoungJoong Kwon, Yuan Shen, Qian Zhang, Andrei State, Jia-Bin Huang, Henry Fuchs
NeuSample: Neural Sample Field for Efficient View Synthesis
Jiemin Fang, Lingxi Xie, Xinggang Wang, Xiaopeng Zhang, Wenyu Liu, Qi Tian
Hallucinated Neural Radiance Fields in the Wild
Xingyu Chen, Qi Zhang, Xiaoyu Li, Yue Chen, Ying Feng, Xuan Wang, Jue Wang
NeRFReN: Neural Radiance Fields with Reflections
Yuan-Chen Guo, Di Kang, Linchao Bao, Yu He, Song-Hai Zhang