Paper ID: 2502.17198 • Published Feb 24, 2025
Dimitra: Audio-driven Diffusion model for Expressive Talking Head Generation
Baptiste Chopin, Tashvik Dhamija, Pranav Balaji, Yaohui Wang, Antitza Dantcheva
Université Côte d'Azur, Inria, STARS Team • Shanghai Artificial Intelligence Laboratory
We propose Dimitra, a novel framework for audio-driven talking head generation, designed to learn lip motion, facial expression, and head pose motion. Specifically, we train a conditional Motion Diffusion Transformer (cMDT) that models facial motion sequences in a 3D representation. We condition the cMDT on only two input signals: an audio sequence and a reference facial image. By extracting additional features directly from the audio, Dimitra increases the quality and realism of the generated videos. In particular, phoneme sequences contribute to the realism of lip motion, whereas the text transcript contributes to the realism of facial expression and head pose. Quantitative and qualitative experiments on two widely used datasets, VoxCeleb2 and HDTF, show that Dimitra outperforms existing approaches in generating realistic talking heads with respect to lip motion, facial expression, and head pose.
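To make the conditioning scheme concrete, below is a minimal PyTorch sketch of a conditional motion diffusion transformer in the spirit of the cMDT described above. The module names, feature dimensions (e.g., wav2vec2-style 768-d audio features, a 512-d reference-face embedding), and the 3D motion parameterization (e.g., 3DMM expression and pose coefficients) are illustrative assumptions, not the paper's released implementation.

```python
# Hypothetical sketch of a cMDT-style denoiser: a transformer that predicts
# clean 3D facial motion from a noisy motion sequence, conditioned on
# per-frame audio features and a reference-face embedding.
import torch
import torch.nn as nn

class ConditionalMotionDiffusionTransformer(nn.Module):
    def __init__(self, motion_dim=64, d_model=512, n_heads=8, n_layers=8):
        super().__init__()
        self.motion_proj = nn.Linear(motion_dim, d_model)  # 3D motion coeffs -> tokens
        self.audio_proj = nn.Linear(768, d_model)          # assumed wav2vec2-style features
        self.ref_proj = nn.Linear(512, d_model)            # assumed reference-face embedding
        self.time_embed = nn.Sequential(                   # diffusion timestep embedding
            nn.Linear(d_model, d_model), nn.SiLU(), nn.Linear(d_model, d_model))
        layer = nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True)
        self.backbone = nn.TransformerEncoder(layer, n_layers)
        self.head = nn.Linear(d_model, motion_dim)         # predict denoised motion

    def forward(self, noisy_motion, t_embed, audio_feats, ref_embed):
        # noisy_motion: (B, T, motion_dim); t_embed: (B, d_model)
        # audio_feats: (B, T, 768); ref_embed: (B, 512)
        tokens = self.motion_proj(noisy_motion) + self.audio_proj(audio_feats)
        tokens = tokens + self.time_embed(t_embed).unsqueeze(1)
        ref = self.ref_proj(ref_embed).unsqueeze(1)        # prepend a reference token
        out = self.backbone(torch.cat([ref, tokens], dim=1))
        return self.head(out[:, 1:])                       # drop the reference token

# Toy usage with random tensors (all shapes are assumptions):
model = ConditionalMotionDiffusionTransformer()
x = model(torch.randn(2, 100, 64), torch.randn(2, 512),
          torch.randn(2, 100, 768), torch.randn(2, 512))
print(x.shape)  # torch.Size([2, 100, 64])
```

In an actual diffusion setup, this denoiser would be called once per sampling step; the per-frame audio tokens let lip motion follow the phoneme timing, while the global reference token carries identity.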