Free View Synthesis
Free-view synthesis aims to generate realistic images of a scene from arbitrary viewpoints, overcoming the limitations of traditional methods restricted to the captured perspectives. Current research relies heavily on neural radiance fields (NeRFs) and 3D Gaussian splatting, often incorporating techniques such as parametric 3D models and motion prediction to handle dynamic scenes, including human figures and complex environments. These advances are improving both the quality and the efficiency of novel view generation, with applications ranging from virtual and augmented reality to animation and digital content creation. Reconstructing and synthesizing complex scenes from limited input data remains a key focus, driving progress in both model architecture and data acquisition strategies.
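At the core of the NeRF family of methods mentioned above is volume rendering: the color of each camera ray is a density-weighted blend of samples along that ray. A minimal NumPy sketch of this compositing step is below (the function name and toy inputs are illustrative, not from any specific paper):

```python
import numpy as np

def composite_ray(sigmas, colors, deltas):
    """NeRF-style quadrature along one ray.

    sigmas: (N,) volume densities at sample points
    colors: (N, 3) RGB values at sample points
    deltas: (N,) distances between consecutive samples
    """
    # Opacity contributed by each segment of the ray.
    alphas = 1.0 - np.exp(-sigmas * deltas)
    # Transmittance: probability the ray reaches sample i unoccluded.
    trans = np.cumprod(np.concatenate([[1.0], 1.0 - alphas]))[:-1]
    # Per-sample blending weights and the final expected color.
    weights = trans * alphas
    rgb = (weights[:, None] * colors).sum(axis=0)
    return rgb, weights

# Toy ray: two empty samples, then one nearly opaque red sample.
sigmas = np.array([0.0, 0.0, 50.0])
colors = np.array([[0.0, 0.0, 1.0],
                   [0.0, 1.0, 0.0],
                   [1.0, 0.0, 0.0]])
deltas = np.ones(3)
rgb, weights = composite_ray(sigmas, colors, deltas)
```

With zero density the first two samples receive zero weight, so the rendered color is dominated by the opaque red sample; 3D Gaussian splatting performs an analogous alpha-compositing step over depth-sorted Gaussians rather than ray samples.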
Papers
EmoTalk3D: High-Fidelity Free-View Synthesis of Emotional 3D Talking Head
Qianyun He, Xinya Ji, Yicheng Gong, Yuanxun Lu, Zhengyu Diao, Linjia Huang, Yao Yao, Siyu Zhu, Zhan Ma, Songcen Xu, Xiaofei Wu, Zixiao Zhang, Xun Cao, Hao Zhu
Head360: Learning a Parametric 3D Full-Head for Free-View Synthesis in 360°
Yuxiao He, Yiyu Zhuang, Yanwen Wang, Yao Yao, Siyu Zhu, Xiaoyu Li, Qi Zhang, Xun Cao, Hao Zhu