Multimodality
Multimodality in machine learning focuses on integrating information from diverse data sources (e.g., text, images, audio, sensor data) to improve model performance and robustness. Current research emphasizes developing effective fusion strategies within various model architectures, including transformers and autoencoders, often employing contrastive learning and techniques to handle missing modalities. This approach is proving valuable across numerous applications, from medical diagnosis and e-commerce to assistive robotics and urban planning, by enabling more comprehensive and accurate analyses than unimodal methods.
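As a rough illustration of the fusion idea described above (not drawn from any specific paper in this collection), the sketch below shows a minimal late-fusion classifier over precomputed text and image embeddings that degrades gracefully when one modality is missing. All module names, dimensions, and the averaging strategy are illustrative assumptions, not a reference implementation.

```python
import torch
import torch.nn as nn


class LateFusionClassifier(nn.Module):
    """Fuses text and image embeddings; tolerates a missing modality."""

    def __init__(self, text_dim=768, image_dim=512, hidden_dim=256, num_classes=10):
        super().__init__()
        # Project each modality into a shared hidden space before fusing.
        self.text_proj = nn.Linear(text_dim, hidden_dim)
        self.image_proj = nn.Linear(image_dim, hidden_dim)
        self.classifier = nn.Sequential(
            nn.ReLU(),
            nn.Linear(hidden_dim, num_classes),
        )

    def forward(self, text_emb=None, image_emb=None):
        # Collect whichever modalities are present and average their
        # projections; an absent modality is simply skipped.
        parts = []
        if text_emb is not None:
            parts.append(self.text_proj(text_emb))
        if image_emb is not None:
            parts.append(self.image_proj(image_emb))
        fused = torch.stack(parts, dim=0).mean(dim=0)
        return self.classifier(fused)


if __name__ == "__main__":
    model = LateFusionClassifier()
    text = torch.randn(4, 768)   # e.g., sentence-encoder outputs (hypothetical dims)
    image = torch.randn(4, 512)  # e.g., ViT / CNN features (hypothetical dims)
    print(model(text_emb=text, image_emb=image).shape)  # both modalities present
    print(model(text_emb=text).shape)                   # image modality missing
```

More elaborate strategies, such as cross-attention fusion or contrastive alignment of the two embedding spaces, build on the same idea of mapping modalities into a shared representation before a downstream head consumes them.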
Papers
19 papers, dated July 14, 2022 to May 7, 2023.