View Synthesis
View synthesis aims to generate realistic images of a scene from novel viewpoints not present in the input data. Current research focuses heavily on improving the speed and quality of view synthesis with methods such as 3D Gaussian splatting and neural radiance fields (NeRFs), often incorporating techniques like multi-view stereo and diffusion models to improve accuracy and handle sparse or inconsistent inputs. These advances matter for applications such as augmented and virtual reality, robotics, and 3D modeling, where they enable more realistic and efficient rendering of complex scenes. The field is also actively exploring ways to improve generalization to unseen scenes and objects, particularly in challenging scenarios such as low-light conditions or very few input views.
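At the core of NeRF-style methods mentioned above is differentiable volume rendering: each pixel is computed by alpha-compositing color and density samples along a camera ray. The following is a minimal sketch of that compositing step (function name and inputs are illustrative, not taken from any specific paper's code):

```python
import numpy as np

def composite_ray(colors, densities, deltas):
    """Alpha-composite samples along one ray, as in NeRF-style
    volume rendering.

    colors:    (N, 3) RGB value predicted at each sample
    densities: (N,)   volume density at each sample
    deltas:    (N,)   distance between adjacent samples
    """
    # Per-sample opacity: alpha_i = 1 - exp(-sigma_i * delta_i)
    alphas = 1.0 - np.exp(-densities * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas[:-1]]))
    # Each sample contributes weight T_i * alpha_i to the pixel
    weights = trans * alphas
    return (weights[:, None] * colors).sum(axis=0)

# Example: a nearly opaque red sample in front of a green one;
# the pixel is dominated by the red sample.
colors = np.array([[1.0, 0.0, 0.0], [0.0, 1.0, 0.0]])
densities = np.array([50.0, 50.0])
deltas = np.array([0.1, 0.1])
pixel = composite_ray(colors, densities, deltas)
```

Because every operation is differentiable, the densities and colors (produced by a neural network in NeRF, or by explicit Gaussians in splatting-based methods) can be optimized directly from posed input images.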
Papers
ReShader: View-Dependent Highlights for Single Image View Synthesis
Avinash Paliwal, Brandon Nguyen, Andrii Tsarov, Nima Khademi Kalantari
SideGAN: 3D-Aware Generative Model for Improved Side-View Image Synthesis
Kyungmin Jo, Wonjoon Jin, Jaegul Choo, Hyunjoon Lee, Sunghyun Cho
Anti-Aliased Neural Implicit Surfaces with Encoding Level of Detail
Yiyu Zhuang, Qi Zhang, Ying Feng, Hao Zhu, Yao Yao, Xiaoyu Li, Yan-Pei Cao, Ying Shan, Xun Cao
Enhancing NeRF akin to Enhancing LLMs: Generalizable NeRF Transformer with Mixture-of-View-Experts
Wenyan Cong, Hanxue Liang, Peihao Wang, Zhiwen Fan, Tianlong Chen, Mukund Varma, Yi Wang, Zhangyang Wang
IT3D: Improved Text-to-3D Generation with Explicit View Synthesis
Yiwen Chen, Chi Zhang, Xiaofeng Yang, Zhongang Cai, Gang Yu, Lei Yang, Guosheng Lin
Novel-view Synthesis and Pose Estimation for Hand-Object Interaction from Sparse Views
Wentian Qu, Zhaopeng Cui, Yinda Zhang, Chenyu Meng, Cuixia Ma, Xiaoming Deng, Hongan Wang
Efficient View Synthesis with Neural Radiance Distribution Field
Yushuang Wu, Xiao Li, Jinglu Wang, Xiaoguang Han, Shuguang Cui, Yan Lu