Robust Multimodal Learning
Robust multimodal learning aims to build systems that effectively combine information from multiple data sources (e.g., images, text, audio) while maintaining performance even when some modalities are missing or corrupted. Current research focuses on model architectures and training strategies that address this challenge, including masked modality projection, conditional hypernetworks, and representation decoupling, which allow a model to cope with varying input sets and absent modalities. This field is significant because it enables more reliable and practical AI systems for applications such as healthcare diagnostics, sentiment analysis, and autonomous driving, where incomplete or noisy data is common.
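Since the summary names these techniques only at a high level, below is a minimal sketch, assuming a PyTorch setup, of one common pattern behind approaches like masked modality projection: substitute a learned placeholder embedding for any absent modality so the fusion step always receives a complete set of inputs. All module names, dimensions, and the simple averaging fusion are illustrative assumptions, not the method of any particular paper.

```python
# Minimal sketch of missing-modality handling via masking (illustrative only):
# each modality gets its own projection into a shared space, and a learned
# "missing" embedding stands in whenever an input is absent.
import torch
import torch.nn as nn


class MaskedMultimodalFusion(nn.Module):
    def __init__(self, image_dim=512, text_dim=300, audio_dim=128, hidden=256):
        super().__init__()
        # One projection per modality into a shared embedding space.
        self.proj = nn.ModuleDict({
            "image": nn.Linear(image_dim, hidden),
            "text": nn.Linear(text_dim, hidden),
            "audio": nn.Linear(audio_dim, hidden),
        })
        # Learned placeholder used whenever a modality is missing.
        self.missing = nn.ParameterDict({
            m: nn.Parameter(torch.zeros(hidden)) for m in self.proj
        })
        self.head = nn.Linear(hidden, 2)  # e.g., a binary sentiment head

    def forward(self, inputs):
        # `inputs` maps modality name -> tensor of shape (batch, dim),
        # with None for modalities that are missing or corrupted.
        batch = next(x.size(0) for x in inputs.values() if x is not None)
        embeddings = []
        for name, proj in self.proj.items():
            x = inputs.get(name)
            if x is None:
                # Broadcast the learned placeholder across the batch.
                emb = self.missing[name].expand(batch, -1)
            else:
                emb = proj(x)
            embeddings.append(emb)
        # Simple average fusion over whatever embeddings are present.
        fused = torch.stack(embeddings).mean(dim=0)
        return self.head(fused)


model = MaskedMultimodalFusion()
batch = {"image": torch.randn(4, 512), "text": torch.randn(4, 300), "audio": None}
logits = model(batch)  # works even though the audio modality is absent
print(logits.shape)    # torch.Size([4, 2])
```

Because the placeholder embeddings are trained jointly with the rest of the network (e.g., by randomly masking modalities during training), the model learns a sensible default for each modality rather than failing when one is dropped at test time.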