Dynamic Portrait
Dynamic portrait generation aims to create realistic, controllable animated portraits from inputs such as audio, monocular video, or still images. Current research centers on sophisticated models built on 3D morphable models, Gaussian splatting, or diffusion-based methods, targeting high-fidelity reconstruction with accurate lip synchronization and expressive control, often aided by techniques such as tri-plane generation and hierarchical audio-visual synthesis. These advances matter for virtual reality, augmented reality, and digital avatar creation, where they improve realism and user interaction.
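To make the audio-driven 3DMM pipeline mentioned above concrete, here is a minimal illustrative sketch: per-frame audio features are regressed to expression-blendshape coefficients, which deform a mean mesh through a linear basis. All names, dimensions, and the (untrained, random) regressor are hypothetical placeholders, not any specific paper's method.

```python
import numpy as np

rng = np.random.default_rng(0)

N_VERTS = 500    # mesh vertex count (hypothetical)
N_EXPR = 16      # number of expression blendshapes (hypothetical)
N_AUDIO = 32     # audio feature dimension, e.g. mel bins (hypothetical)

# 3DMM core: vertices = mean shape + expression basis @ coefficients
mean_shape = rng.standard_normal((N_VERTS, 3))
expr_basis = rng.standard_normal((N_VERTS, 3, N_EXPR)) * 0.01

# Stand-in linear audio-to-expression regressor (would be learned in practice).
W = rng.standard_normal((N_EXPR, N_AUDIO)) * 0.1

def animate_frame(audio_feat: np.ndarray) -> np.ndarray:
    """Map one frame of audio features to a deformed mesh."""
    coeffs = np.tanh(W @ audio_feat)   # (N_EXPR,), bounded coefficients
    offset = expr_basis @ coeffs       # (N_VERTS, 3) per-vertex displacement
    return mean_shape + offset

audio_frames = rng.standard_normal((10, N_AUDIO))  # 10 frames of features
meshes = np.stack([animate_frame(f) for f in audio_frames])
print(meshes.shape)  # (10, 500, 3): frames x vertices x xyz
```

Real systems replace the random linear map with a learned network (often conditioned on identity) and render the deformed mesh, e.g. via Gaussian splatting or a diffusion-based renderer, but the shape-plus-expression decomposition shown here is the common backbone.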