Audio-Driven
Audio-driven research focuses on understanding and generating audio signals, often in conjunction with other modalities such as text and video. Current efforts concentrate on building robust models for tasks such as audio-visual representation learning, talking-head synthesis (using diffusion models and autoencoders), and audio-to-text and text-to-audio generation (leveraging large language models and neural codecs). These advances have significant implications for film-making, virtual reality, assistive technologies, and multimedia forensics: they enable more realistic and interactive audio-visual experiences and sharper analysis of audio-visual data.
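To make the pipeline concrete, the sketch below shows the kind of log-spectrogram front end that audio-driven models (for representation learning, talking-head synthesis, or audio-to-text) commonly consume. It is a minimal NumPy illustration, not any specific paper's method; the 25 ms / 10 ms framing at 16 kHz and the synthetic sine-wave input are illustrative assumptions.

```python
import numpy as np

def frame_signal(x, frame_len=400, hop=160):
    # Slice the waveform into overlapping frames
    # (25 ms windows with a 10 ms hop at 16 kHz -- common defaults).
    n_frames = 1 + (len(x) - frame_len) // hop
    idx = np.arange(frame_len)[None, :] + hop * np.arange(n_frames)[:, None]
    return x[idx]

def log_spectrogram(x, frame_len=400, hop=160, n_fft=512):
    # Window each frame, take the magnitude spectrum, and compress with log.
    frames = frame_signal(x, frame_len, hop) * np.hanning(frame_len)
    power = np.abs(np.fft.rfft(frames, n=n_fft)) ** 2
    return np.log(power + 1e-8)

# One second of a 440 Hz tone at 16 kHz stands in for real speech.
sr = 16000
t = np.arange(sr) / sr
wave = np.sin(2 * np.pi * 440 * t)

feats = log_spectrogram(wave)
print(feats.shape)  # (98, 257): time frames x frequency bins
```

Such a time-frequency matrix is the typical input that a downstream encoder (e.g. a transformer or convolutional network) maps to lip motion, text, or a joint audio-visual embedding; production systems usually add a mel filterbank on top of this raw spectrogram.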
Papers