Cross Modality
Cross-modality research focuses on integrating information from different data types (e.g., images, text, audio) so that models can exploit complementary signals across modalities. Current work emphasizes robust methods for handling inconsistencies between modalities, particularly contrastive learning, generative adversarial networks (GANs), and vision transformers, often within unsupervised domain adaptation or self-training frameworks. The field is significant for medical image analysis (e.g., improved segmentation and diagnosis), autonomous driving, and other applications that must fuse heterogeneous data sources, where combining modalities leads to more accurate and reliable systems.
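To make the contrastive-learning approach mentioned above concrete, the sketch below shows a symmetric cross-modal contrastive (InfoNCE-style) objective over paired embeddings from two modalities. It is a minimal illustration, assuming PyTorch and a batch of paired image/text embeddings produced by separate encoders; the function and variable names are illustrative, not taken from any specific paper.

```python
import torch
import torch.nn.functional as F

def cross_modal_contrastive_loss(image_emb, text_emb, temperature=0.07):
    """Symmetric InfoNCE loss over a batch of paired embeddings.

    Assumes image_emb and text_emb are (batch, dim) tensors where row i of
    each tensor comes from the same underlying sample (a positive pair);
    all other rows in the batch serve as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    image_emb = F.normalize(image_emb, dim=-1)
    text_emb = F.normalize(text_emb, dim=-1)

    # Pairwise similarity matrix scaled by temperature: (batch, batch).
    logits = image_emb @ text_emb.t() / temperature

    # Matching pairs sit on the diagonal.
    targets = torch.arange(image_emb.size(0), device=image_emb.device)

    # Contrast in both directions (image->text and text->image) and average.
    loss_i2t = F.cross_entropy(logits, targets)
    loss_t2i = F.cross_entropy(logits.t(), targets)
    return (loss_i2t + loss_t2i) / 2

if __name__ == "__main__":
    # Toy usage with random tensors standing in for encoder outputs.
    img = torch.randn(8, 256)
    txt = torch.randn(8, 256)
    print(cross_modal_contrastive_loss(img, txt).item())
```

The same pattern applies to other modality pairs (e.g., CT/MR images or LiDAR/camera features): each modality gets its own encoder, and the shared embedding space is learned by pulling paired samples together and pushing unpaired ones apart.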