Driven 3D
Driven 3D, here meaning audio-driven 3D talking head generation, aims to create realistic and expressive virtual humans animated from speech input. Current research focuses on improving lip synchronization, emotional expressiveness, and rendering quality, drawing on model architectures such as Transformers, neural radiance fields (NeRFs), and structured state-space models (SSMs), often combined with techniques like meta-learning and the disentanglement of emotional and content features. The field is significant for applications in virtual reality, augmented reality, and animation, with ongoing efforts to improve realism, efficiency (e.g., real-time rendering), and generalization across languages and speaking styles.
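As a rough illustration of the disentanglement idea mentioned above, the sketch below separates an audio feature sequence into a content latent that drives lip motion and a pooled emotion latent that modulates expression, with a small Transformer encoder mapping speech features to per-frame facial motion parameters. All module names, dimensions, and the choice of 52 blendshape-like outputs are hypothetical (not taken from any specific paper); this is a minimal PyTorch sketch of the general pattern, not a reference implementation.

```python
import torch
import torch.nn as nn

class DisentangledAudioToMotion(nn.Module):
    """Hypothetical sketch: map audio features to 3D facial motion
    parameters via separate content (lip-sync) and emotion latents."""

    def __init__(self, audio_dim=80, model_dim=256, motion_dim=52,
                 emotion_dim=16, num_layers=4, num_heads=4):
        super().__init__()
        self.audio_proj = nn.Linear(audio_dim, model_dim)
        encoder_layer = nn.TransformerEncoderLayer(
            d_model=model_dim, nhead=num_heads, batch_first=True)
        self.encoder = nn.TransformerEncoder(encoder_layer, num_layers)
        # Two heads split the shared representation into disentangled parts.
        self.content_head = nn.Linear(model_dim, model_dim)    # what is said
        self.emotion_head = nn.Linear(model_dim, emotion_dim)  # how it is said
        # Decoder conditions per-frame motion on content, modulated by emotion.
        self.motion_decoder = nn.Linear(model_dim + emotion_dim, motion_dim)

    def forward(self, audio_feats):
        # audio_feats: (batch, frames, audio_dim), e.g. mel-spectrogram frames
        h = self.encoder(self.audio_proj(audio_feats))
        content = self.content_head(h)
        # Pool emotion over time: speaking style is roughly clip-level,
        # while content must stay frame-aligned for lip sync.
        emotion = self.emotion_head(h).mean(dim=1, keepdim=True)
        emotion = emotion.expand(-1, h.size(1), -1)
        # Per-frame motion parameters, e.g. blendshape coefficients.
        return self.motion_decoder(torch.cat([content, emotion], dim=-1))

model = DisentangledAudioToMotion()
mel = torch.randn(2, 100, 80)  # 2 clips, 100 frames of 80-dim features
motion = model(mel)            # (2, 100, 52) blendshape-like outputs
```

The frame-aligned content path versus the temporally pooled emotion path is the key design choice here: it encourages the model to route lip-sync information and speaking-style information through different latents, which is the intuition behind the content/emotion disentanglement work surveyed above.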