Audio-Driven
Audio-driven research focuses on understanding and generating audio signals, often in conjunction with other modalities such as text and video. Current efforts concentrate on building robust models for tasks such as audio-visual representation learning, talking-head synthesis (using diffusion models and autoencoders), and audio-to-text/text-to-audio generation (leveraging large language models and neural codecs). These advances have significant implications for filmmaking, virtual reality, assistive technologies, and multimedia forensics: they enable more realistic and interactive audio-visual experiences and improve the analysis of audio-visual data.
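To make the audio-visual representation learning task above concrete, here is a minimal, illustrative sketch of a contrastive (InfoNCE-style) objective over paired audio/video clip embeddings. It is a toy, dependency-free implementation under assumed conventions, not the method of any specific paper; the function names, the temperature value, and the toy embeddings are all hypothetical.

```python
import math

def dot(u, v):
    # Inner product of two equal-length vectors.
    return sum(a * b for a, b in zip(u, v))

def normalize(v):
    # L2-normalize so similarities are cosine similarities.
    n = math.sqrt(dot(v, v)) or 1.0
    return [x / n for x in v]

def info_nce(audio_embs, video_embs, temperature=0.07):
    """One-directional InfoNCE loss over paired clip embeddings.

    audio_embs[i] and video_embs[i] are assumed to come from the same
    clip (the positive pair); every other video embedding in the batch
    serves as a negative for audio embedding i.
    """
    a = [normalize(x) for x in audio_embs]
    v = [normalize(x) for x in video_embs]
    n = len(a)
    loss = 0.0
    for i in range(n):
        logits = [dot(a[i], v[j]) / temperature for j in range(n)]
        # Numerically stable log-sum-exp for the softmax normalizer.
        m = max(logits)
        log_z = m + math.log(sum(math.exp(l - m) for l in logits))
        loss += log_z - logits[i]  # -log softmax at the positive index
    return loss / n

# Toy usage: perfectly aligned pairs yield a near-zero loss, while
# mismatched pairs yield a large one.
aligned = [[1.0, 0.0], [0.0, 1.0]]
mismatched = [[0.0, 1.0], [1.0, 0.0]]
print(info_nce(aligned, aligned))      # near zero
print(info_nce(aligned, mismatched))   # large
```

In practice this objective is computed on embeddings produced by learned audio and video encoders and minimized with gradient descent; the pure-Python loop above only illustrates the loss itself.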