Identity Preservation
Identity preservation in image and video generation focuses on creating realistic synthetic media that accurately maintains the identity of a reference subject, even as that subject undergoes significant alterations such as pose changes, expression modifications, or style transfers. Current research relies heavily on diffusion models, often incorporating techniques such as low-rank adaptation, classifier guidance, and various attention mechanisms, with a strong emphasis on tuning-free methods that enable efficient and flexible personalization. This field is crucial for applications such as personalized avatars and realistic video editing, and it also helps mitigate the risks of deepfakes by improving the detection of manipulated media.
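To make the tuning-free attention-based conditioning mentioned above concrete, the sketch below shows one common pattern: a reference-face embedding is projected into the diffusion model's token space and attended to alongside the text prompt inside a cross-attention block. This is a minimal illustrative sketch, not the method of either paper listed here; the class name, the id_scale parameter, and the assumption of an external face-recognition embedding are all hypothetical.

```python
import torch
import torch.nn as nn

class IdentityCrossAttention(nn.Module):
    """Minimal sketch: inject a reference identity embedding into a
    cross-attention block alongside the text tokens (tuning-free style).
    Names and dimensions are illustrative, not any specific paper's API."""

    def __init__(self, dim: int, id_dim: int, heads: int = 8):
        super().__init__()
        self.attn_text = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.attn_id = nn.MultiheadAttention(dim, heads, batch_first=True)
        # Project the face embedding (e.g. from a face-recognition model)
        # into the latent token space so it can act as extra key/value tokens.
        self.id_proj = nn.Linear(id_dim, dim)
        # Hypothetical scale controlling how strongly the identity features
        # influence the output relative to the text prompt.
        self.id_scale = 0.8

    def forward(self, hidden, text_tokens, id_embed):
        # hidden:      (B, N, dim)    latent image tokens
        # text_tokens: (B, T, dim)    encoded prompt tokens
        # id_embed:    (B, id_dim)    reference-face embedding
        id_tokens = self.id_proj(id_embed).unsqueeze(1)          # (B, 1, dim)
        out_text, _ = self.attn_text(hidden, text_tokens, text_tokens)
        out_id, _ = self.attn_id(hidden, id_tokens, id_tokens)
        return hidden + out_text + self.id_scale * out_id

# Usage sketch with dummy tensors.
block = IdentityCrossAttention(dim=320, id_dim=512)
hidden = torch.randn(1, 64, 320)
text_tokens = torch.randn(1, 77, 320)
id_embed = torch.randn(1, 512)
out = block(hidden, text_tokens, id_embed)   # (1, 64, 320)
```

Keeping the identity pathway as a separate attention branch with its own scale is one way such methods stay tuning-free: the base model and text pathway remain frozen, and only the small identity projection needs to be trained once, then reused for any new reference subject.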
Papers
ID-Aligner: Enhancing Identity-Preserving Text-to-Image Generation with Reward Feedback Learning
Weifeng Chen, Jiacheng Zhang, Jie Wu, Hefeng Wu, Xuefeng Xiao, Liang Lin
ID-Animator: Zero-Shot Identity-Preserving Human Video Generation
Xuanhua He, Quande Liu, Shengju Qian, Xin Wang, Tao Hu, Ke Cao, Keyu Yan, Jie Zhang