Portrait Generation
Portrait generation research focuses on creating realistic and controllable synthetic portraits, often from limited input such as a single image or a text description. Current methods build on generative adversarial networks (GANs), diffusion models, and neural radiance fields (NeRFs), frequently incorporating 3D modeling for improved realism and view consistency, and accepting multimodal prompts (text, images) for finer-grained control. The field underpins applications in video editing, virtual reality, and personalized content creation, and drives advances in image synthesis and 3D modeling techniques.
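As a concrete illustration of text-conditioned portrait synthesis with a diffusion model, the sketch below uses the Hugging Face diffusers library. The checkpoint name, prompt, and sampling settings are illustrative assumptions; it does not implement any of the specific papers listed below.

```python
# Minimal sketch: text-conditioned portrait synthesis with a pretrained
# latent diffusion model via the Hugging Face `diffusers` library.
# The checkpoint and prompt are assumptions for illustration only.
import torch
from diffusers import StableDiffusionPipeline

# Load an assumed pretrained text-to-image diffusion pipeline.
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# A text prompt steering portrait attributes (lighting, framing, style).
prompt = "studio portrait photo of a person, soft lighting, shallow depth of field"

# Run the reverse diffusion process; more steps trade speed for fidelity,
# and guidance_scale controls adherence to the prompt.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("portrait.png")
```

In practice, tighter control (e.g., identity preservation from a reference image or view-consistent 3D outputs) requires the specialized conditioning and 3D-aware architectures developed in works such as those listed below.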
Papers
ReliTalk: Relightable Talking Portrait Generation from a Single Video
Haonan Qiu, Zhaoxi Chen, Yuming Jiang, Hang Zhou, Xiangyu Fan, Lei Yang, Wayne Wu, Ziwei Liu
AniPortraitGAN: Animatable 3D Portrait Generation from 2D Image Collections
Yue Wu, Sicheng Xu, Jianfeng Xiang, Fangyun Wei, Qifeng Chen, Jiaolong Yang, Xin Tong