Cross-Modal Matching
Cross-modal matching focuses on aligning and comparing data from different modalities, such as images and text, to enable tasks like image retrieval using textual descriptions or semantic segmentation guided by captions. Current research emphasizes robust methods that handle noisy or incomplete data, often employing transformer-based architectures and contrastive learning techniques to improve the accuracy of cross-modal similarity measurement. These advancements are crucial for improving various applications, including scene understanding, information retrieval, and human-computer interaction, by enabling more effective integration of diverse data sources.
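The contrastive approach mentioned above can be sketched with a CLIP-style symmetric InfoNCE objective: paired image and text embeddings are L2-normalized, their cosine similarities form a logit matrix, and each modality is trained to rank its true partner above in-batch negatives. This is a minimal numpy illustration under assumed shapes and a hypothetical temperature value, not any specific paper's implementation.

```python
import numpy as np

def clip_style_contrastive_loss(img_emb, txt_emb, temperature=0.07):
    """Symmetric InfoNCE loss over paired image/text embeddings.

    img_emb, txt_emb: (N, D) arrays where row i of each modality is a
    matching pair; all other rows in the batch act as negatives.
    """
    # L2-normalize so dot products become cosine similarities.
    img = img_emb / np.linalg.norm(img_emb, axis=1, keepdims=True)
    txt = txt_emb / np.linalg.norm(txt_emb, axis=1, keepdims=True)

    logits = img @ txt.T / temperature   # (N, N) similarity matrix
    labels = np.arange(len(logits))      # true pairs lie on the diagonal

    def cross_entropy(l, y):
        l = l - l.max(axis=1, keepdims=True)  # numerical stability
        log_probs = l - np.log(np.exp(l).sum(axis=1, keepdims=True))
        return -log_probs[np.arange(len(y)), y].mean()

    # Average the image-to-text and text-to-image retrieval directions.
    return 0.5 * (cross_entropy(logits, labels)
                  + cross_entropy(logits.T, labels))
```

At inference time the same normalized similarity matrix supports retrieval directly: for a text query, the image with the highest cosine similarity is returned, which is what makes this objective a natural fit for cross-modal matching.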