Audio-Visual
Audio-visual research focuses on understanding and exploiting the interplay between audio and visual information, with the primary aim of improving multimodal understanding and generation. Current work emphasizes sophisticated models, often built on transformer architectures and diffusion models, for tasks such as video-to-audio generation, audio-visual speech recognition, and emotion analysis from combined audio-visual streams. The field matters for applications ranging from media production and accessibility technologies to mental health diagnostics, where it enables more robust and nuanced analysis of multimedia content.
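As a concrete illustration of the transformer-style fusion mentioned above, here is a minimal, self-contained sketch of cross-attention between audio and visual token sequences. It is not taken from any of the papers listed below; the module name, dimensions, and pooling choice are illustrative assumptions.

```python
# A minimal sketch (not from any listed paper) of transformer-style
# cross-attention fusion between audio and visual token sequences.
# Module names, dimensions, and shapes are illustrative assumptions.
import torch
import torch.nn as nn

class AudioVisualCrossAttention(nn.Module):
    """Visual tokens attend to audio tokens and vice versa; the two attended
    streams are then pooled and concatenated into one joint embedding."""

    def __init__(self, dim: int = 256, num_heads: int = 4):
        super().__init__()
        # batch_first=True -> tensors are (batch, sequence, feature)
        self.v_attends_a = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.a_attends_v = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm_v = nn.LayerNorm(dim)
        self.norm_a = nn.LayerNorm(dim)

    def forward(self, visual: torch.Tensor, audio: torch.Tensor) -> torch.Tensor:
        # visual: (B, Tv, dim) frame features; audio: (B, Ta, dim) audio features
        v_out, _ = self.v_attends_a(query=visual, key=audio, value=audio)
        a_out, _ = self.a_attends_v(query=audio, key=visual, value=visual)
        v_out = self.norm_v(visual + v_out)  # residual + norm, as in transformers
        a_out = self.norm_a(audio + a_out)
        # Mean-pool each stream over time and concatenate -> (B, 2 * dim)
        return torch.cat([v_out.mean(dim=1), a_out.mean(dim=1)], dim=-1)

# Usage with random features standing in for real encoder outputs
if __name__ == "__main__":
    fuse = AudioVisualCrossAttention()
    visual = torch.randn(2, 16, 256)  # 2 clips, 16 video frames each
    audio = torch.randn(2, 50, 256)   # 2 clips, 50 audio frames each
    joint = fuse(visual, audio)
    print(joint.shape)                # torch.Size([2, 512])
```

The joint embedding produced this way could feed a downstream head for verification, recognition, or emotion analysis; the listed papers explore more elaborate fusion schemes than this sketch.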
Papers
Dynamic Cross Attention for Audio-Visual Person Verification
R. Gnana Praveen, Jahangir Alam
Audio-Visual Person Verification based on Recursive Fusion of Joint Cross-Attention
R. Gnana Praveen, Jahangir Alam
CAT: Enhancing Multimodal Large Language Model to Answer Questions in Dynamic Audio-Visual Scenarios
Qilang Ye, Zitong Yu, Rui Shao, Xinyu Xie, Philip Torr, Xiaochun Cao