Facial Animation
Facial animation research focuses on generating realistic 3D facial motion efficiently from inputs such as speech, video, or text. Current efforts concentrate on improving the realism, diversity, and controllability of the resulting animations, often employing generative models such as variational autoencoders (VAEs) and diffusion models alongside transformer networks for processing sequential data. These advances are driving progress in applications such as virtual reality, video games, and film, enabling more expressive and lifelike virtual characters and avatars. Building large, high-quality datasets is also a key focus, since it enables the training of more robust and accurate models.
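As a rough illustration of the transformer-based, audio-driven approach mentioned above, the sketch below maps a sequence of precomputed speech features to per-frame vertex displacements of a neutral template mesh. It is a minimal sketch under stated assumptions, not the method of any paper listed here: the 768-dim audio features (wav2vec-style), the 5023-vertex mesh (FLAME-style topology), and all layer sizes are illustrative choices, and real models would add positional encodings and a proper loss/training loop.

```python
# Minimal sketch: speech features -> per-frame 3D vertex offsets.
# All dimensions and names are illustrative assumptions.
import torch
import torch.nn as nn

class AudioToFaceTransformer(nn.Module):
    """Maps a sequence of audio feature frames to an animated face mesh."""

    def __init__(self, audio_dim=768, d_model=256, n_heads=4,
                 n_layers=4, n_vertices=5023):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, d_model)
        # Positional encodings are omitted for brevity; real models need them.
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=d_model, nhead=n_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers=n_layers)
        # Predict per-vertex (x, y, z) displacements from the neutral template.
        self.vertex_head = nn.Linear(d_model, n_vertices * 3)

    def forward(self, audio_feats, template):
        # audio_feats: (batch, frames, audio_dim), e.g. from a pretrained
        # speech encoder; template: (batch, n_vertices, 3) neutral-face mesh.
        h = self.encoder(self.audio_proj(audio_feats))
        offsets = self.vertex_head(h)                  # (B, frames, n_vertices*3)
        offsets = offsets.view(*offsets.shape[:2], -1, 3)
        return template.unsqueeze(1) + offsets         # animated mesh per frame

# Usage: a 2-second clip at 30 fps with 768-dim audio features.
model = AudioToFaceTransformer()
audio = torch.randn(1, 60, 768)
template = torch.randn(1, 5023, 3)
animated = model(audio, template)   # shape: (1, 60, 5023, 3)
```

Regressing vertex offsets directly is the simplest deterministic formulation; the VAE and diffusion variants discussed above replace this regression head with a sampled latent or an iterative denoiser to capture the diversity of plausible facial motion for the same audio.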
Papers
Universal Facial Encoding of Codec Avatars from VR Headsets
Shaojie Bai, Te-Li Wang, Chenghui Li, Akshay Venkatesh, Tomas Simon, Chen Cao, Gabriel Schwartz, Ryan Wrench, Jason Saragih, Yaser Sheikh, Shih-En Wei
EmoFace: Audio-driven Emotional 3D Face Animation
Chang Liu, Qunfen Lin, Zijiao Zeng, Ye Pan