Audio-Visual
Audio-visual research focuses on understanding and exploiting the interplay between audio and visual information, with the primary aim of improving multimodal understanding and generation. Current work emphasizes sophisticated models, often built on transformer architectures and diffusion models, for tasks such as video-to-audio generation, audio-visual speech recognition, and emotion analysis from combined audio-visual data. The field is significant for its potential applications across domains including media production, accessibility technologies, and even mental health diagnostics, as it enables more robust and nuanced analysis of multimedia content.
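To make the idea of aligning the two modalities concrete, below is a minimal, illustrative sketch of a symmetric contrastive (InfoNCE-style) objective between audio and visual clip embeddings. This is a common pattern in audio-visual representation learning, not the method of any specific paper listed here; the embedding dimension, batch size, and temperature are arbitrary assumptions.

```python
import numpy as np

def info_nce(audio_emb, visual_emb, temperature=0.07):
    """Symmetric contrastive loss: matched audio/visual pairs are positives,
    all other pairs in the batch are negatives. Purely illustrative."""
    # L2-normalize so dot products are cosine similarities
    a = audio_emb / np.linalg.norm(audio_emb, axis=1, keepdims=True)
    v = visual_emb / np.linalg.norm(visual_emb, axis=1, keepdims=True)
    logits = a @ v.T / temperature  # (B, B) cross-modal similarity matrix
    # Log-softmax over each row; matched pairs sit on the diagonal
    log_sm_a2v = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    loss_a2v = -np.mean(np.diag(log_sm_a2v))
    # Symmetric direction: visual-to-audio
    log_sm_v2a = logits.T - np.log(np.exp(logits.T).sum(axis=1, keepdims=True))
    loss_v2a = -np.mean(np.diag(log_sm_v2a))
    return (loss_a2v + loss_v2a) / 2

rng = np.random.default_rng(0)
audio = rng.normal(size=(4, 16))
visual = audio + 0.01 * rng.normal(size=(4, 16))  # nearly aligned pairs
print(float(info_nce(audio, visual)))
```

Training a joint encoder to minimize this loss pulls an audio clip's embedding toward its own video frame and away from the other clips in the batch, which is the core mechanism behind tasks like sound source localization and cross-modal retrieval.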
Papers
From Vision to Audio and Beyond: A Unified Model for Audio-Visual Representation and Generation
Kun Su, Xiulong Liu, Eli Shlizerman
Beyond Single-Audio: Advancing Multi-Audio Processing in Audio Large Language Models
Yiming Chen, Xianghu Yue, Xiaoxue Gao, Chen Zhang, Luis Fernando D'Haro, Robby T. Tan, Haizhou Li
Human-Inspired Audio-Visual Speech Recognition: Spike Activity, Cueing Interaction and Causal Processing
Qianhui Liu, Jiadong Wang, Yang Wang, Xin Yang, Gang Pan, Haizhou Li
Enhancing Sound Source Localization via False Negative Elimination
Zengjie Song, Jiangshe Zhang, Yuxi Wang, Junsong Fan, Zhaoxiang Zhang