Multimodal Learning
Multimodal learning aims to improve machine learning performance by integrating data from multiple sources, such as text, images, and audio, to create richer and more robust representations. Current research focuses on challenges such as missing modalities (building models that remain reliable when some inputs are absent), modality imbalance (ensuring each modality contributes fairly to the joint representation), and efficient fusion (with approaches including dynamic anchor methods, single-branch networks, and various attention mechanisms). This field is significant because it enables more accurate and contextually aware systems across diverse applications, including healthcare diagnostics, recommendation systems, and video understanding.
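To make the fusion point concrete, below is a minimal sketch of one common pattern, cross-modal attention, in which features from one modality attend over another before the two are combined. It is an illustrative assumption rather than the method of any paper listed here: the class name, dimensions, and the residual-connection remark about degraded modalities are all hypothetical choices for the example.

```python
import torch
import torch.nn as nn

class CrossAttentionFusion(nn.Module):
    """Illustrative cross-attention fusion of two modality feature sequences."""

    def __init__(self, dim=256, num_heads=4):
        super().__init__()
        # Text queries attend over image tokens; a symmetric branch could be added.
        self.attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm = nn.LayerNorm(dim)

    def forward(self, text_feats, image_feats):
        # text_feats:  (batch, text_len, dim)
        # image_feats: (batch, num_patches, dim)
        fused, _ = self.attn(query=text_feats, key=image_feats, value=image_feats)
        # The residual keeps the original text signal, so the representation
        # degrades gracefully when the image modality is weak or uninformative.
        return self.norm(text_feats + fused)

# Usage: pool the fused sequence into a single joint representation.
fusion = CrossAttentionFusion(dim=256)
text = torch.randn(8, 20, 256)    # e.g., token embeddings
image = torch.randn(8, 49, 256)   # e.g., 7x7 patch embeddings
joint = fusion(text, image).mean(dim=1)  # shape: (8, 256)
```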
Papers
Assessing Modality Bias in Video Question Answering Benchmarks with Multimodal Large Language Models
Jean Park, Kuk Jin Jang, Basam Alasaly, Sriharsha Mopidevi, Andrew Zolensky, Eric Eaton, Insup Lee, Kevin Johnson
Multimodal Methods for Analyzing Learning and Training Environments: A Systematic Literature Review
Clayton Cohn, Eduardo Davalos, Caleb Vatral, Joyce Horn Fonteles, Hanchen David Wang, Meiyi Ma, Gautam Biswas
CACE-Net: Co-guidance Attention and Contrastive Enhancement for Effective Audio-Visual Event Localization
Xiang He, Xiangxi Liu, Yang Li, Dongcheng Zhao, Guobin Shen, Qingqun Kong, Xin Yang, Yi Zeng
Distribution-Level Memory Recall for Continual Learning: Preserving Knowledge and Avoiding Confusion
Shaoxu Cheng, Kanglei Geng, Chiyuan He, Zihuan Qiu, Linfeng Xu, Heqian Qiu, Lanxiao Wang, Qingbo Wu, Fanman Meng, Hongliang Li