Blendshape Model
Blendshape models represent facial expressions as weighted combinations of base shapes, enabling realistic animation of virtual characters. Current research focuses on improving the accuracy and efficiency of blendshape generation from varied input sources, including single images, sparse video, and synthetic data, often combining deep learning with inverse rendering to achieve high-fidelity results. These advances are driving progress in virtual reality, gaming, and film, where they enable more efficient and customizable facial animation pipelines. Lightweight, real-time-capable models are a significant area of focus, enabling deployment on mobile devices.
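To make the "weighted combination of base shapes" idea concrete, the sketch below uses the common delta (additive) formulation, where each blendshape is stored as a per-vertex offset from a neutral face and the animated mesh is the neutral face plus the weighted sum of those offsets. This is a minimal NumPy illustration; the function name, array shapes, weight clipping to [0, 1], and the toy "smile" / "jaw open" offsets are assumptions for demonstration, not the method of any particular paper.

import numpy as np

def apply_blendshapes(neutral: np.ndarray,
                      deltas: np.ndarray,
                      weights: np.ndarray) -> np.ndarray:
    """Linearly combine blendshape deltas with the neutral face.

    neutral: (V, 3) vertex positions of the neutral (rest) face.
    deltas:  (K, V, 3) per-blendshape vertex offsets from the neutral face.
    weights: (K,) activation weights, often constrained to [0, 1].
    Returns the deformed (V, 3) mesh: neutral + sum_k w_k * delta_k.
    """
    weights = np.clip(weights, 0.0, 1.0)
    # Contract the blendshape axis of `weights` against the first axis of `deltas`.
    return neutral + np.tensordot(weights, deltas, axes=1)

# Toy example: 3 vertices, 2 hypothetical blendshapes ("smile", "jaw open").
neutral = np.zeros((3, 3))
deltas = np.stack([
    np.array([[0.0, 0.1, 0.0]] * 3),   # hypothetical "smile" offsets
    np.array([[0.0, -0.2, 0.0]] * 3),  # hypothetical "jaw open" offsets
])
mesh = apply_blendshapes(neutral, deltas, np.array([0.6, 0.3]))

The linearity of this formulation is what makes blendshape weights a convenient regression target for learned models: a network only has to predict the K weights (and possibly the base shapes themselves), and the mesh follows by a single weighted sum.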