Audio-Visual
Audio-visual research focuses on understanding and leveraging the interplay between audio and visual information, primarily aiming to improve multimodal understanding and generation. Current work emphasizes sophisticated models, often built on transformer architectures and diffusion models, for tasks such as video-to-audio generation, audio-visual speech recognition, and emotion analysis from combined audio-visual data. The field is significant for applications in media production, accessibility technologies, and even mental health diagnostics, where it enables more robust and nuanced analysis of multimedia content.
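A recurring building block behind the transformer-based fusion mentioned above is cross-modal attention, where features from one modality attend to features from the other. The sketch below is a minimal, illustrative NumPy implementation of single-head cross-attention (visual frames querying audio frames); it is a toy example under assumed feature shapes, not the method of any specific paper listed here.

```python
import numpy as np

def softmax(x, axis=-1):
    """Numerically stable softmax along the given axis."""
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def cross_modal_attention(visual, audio):
    """Fuse modalities: each visual frame attends over all audio frames.

    visual: (T_v, d) array of per-frame visual embeddings
    audio:  (T_a, d) array of per-frame audio embeddings
    Returns a (T_v, d) array of audio-informed visual features.
    """
    d = visual.shape[-1]
    scores = visual @ audio.T / np.sqrt(d)   # (T_v, T_a) similarity logits
    weights = softmax(scores, axis=-1)       # rows sum to 1 over audio frames
    return weights @ audio                   # weighted mix of audio features

# Toy usage with random embeddings (dimensions are arbitrary assumptions).
rng = np.random.default_rng(0)
vis = rng.standard_normal((5, 16))   # 5 visual frames, 16-dim features
aud = rng.standard_normal((8, 16))   # 8 audio frames, 16-dim features
fused = cross_modal_attention(vis, aud)
print(fused.shape)  # -> (5, 16)
```

In full models this block would be applied bidirectionally (audio attending to video as well), with learned query/key/value projections and multiple heads; the core weighted-mixing idea is the same.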
Papers
Neural Speech Tracking in a Virtual Acoustic Environment: Audio-Visual Benefit for Unscripted Continuous Speech
Mareike Daeglau, Juergen Otten, Giso Grimm, Bojana Mirkovic, Volker Hohmann, Stefan Debener
AVS-Mamba: Exploring Temporal and Multi-modal Mamba for Audio-Visual Segmentation
Sitong Gong, Yunzhi Zhuge, Lu Zhang, Yifan Wang, Pingping Zhang, Lijun Wang, Huchuan Lu
JoVALE: Detecting Human Actions in Video Using Audiovisual and Language Contexts
Taein Son, Soo Won Seo, Jisong Kim, Seok Hwan Lee, Jun Won Choi
Query-centric Audio-Visual Cognition Network for Moment Retrieval, Segmentation and Step-Captioning
Yunbin Tu, Liang Li, Li Su, Qingming Huang
SAVGBench: Benchmarking Spatially Aligned Audio-Video Generation
Kazuki Shimada, Christian Simon, Takashi Shibuya, Shusuke Takahashi, Yuki Mitsufuji