Noisy Correspondence Learning
Noisy correspondence learning tackles the challenge of aligning data across modalities (e.g., images and text) when the pairings are imperfect or mislabeled. Current research focuses on developing robust models that can handle these noisy correspondences, employing techniques such as information-theoretic frameworks that disentangle relevant from irrelevant information, pseudo-classification strategies that improve supervision, and memory-based approaches that mitigate error accumulation. This field is crucial for advancing cross-modal retrieval and related applications: it enables the effective use of readily available but imperfect data, reducing reliance on expensive, perfectly annotated datasets.
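A common building block in this line of work is to estimate which pairs in a mini-batch are correctly matched before trusting them for supervision. The sketch below illustrates one widely used heuristic, the small-loss criterion (confidently matched pairs tend to incur lower alignment loss), together with a soft re-weighting of pairs by confidence. This is a minimal, illustrative sketch assuming per-pair alignment losses are already available from some cross-modal model; the function names and the sigmoid weighting scheme are our own choices, not the method of any specific paper.

```python
import numpy as np

def split_clean_noisy(losses, clean_fraction=0.5):
    """Small-loss criterion: flag the lowest-loss fraction of pairs as clean.

    losses: per-pair alignment losses from a cross-modal model (assumed given).
    Returns a boolean mask, True for pairs treated as correctly matched.
    """
    losses = np.asarray(losses, dtype=float)
    k = max(1, int(len(losses) * clean_fraction))
    clean = np.zeros(len(losses), dtype=bool)
    clean[np.argsort(losses)[:k]] = True
    return clean

def confidence_weights(losses, temperature=1.0):
    """Soft alternative: down-weight likely mismatched (high-loss) pairs.

    Standardizes the losses and maps them through a decreasing sigmoid,
    so high loss -> weight near 0, low loss -> weight near 1.
    """
    losses = np.asarray(losses, dtype=float)
    z = (losses - losses.mean()) / (losses.std() + 1e-8)
    return 1.0 / (1.0 + np.exp(z / temperature))

# Toy demo: 80 well-matched pairs (low loss) mixed with 20 mismatched ones.
rng = np.random.default_rng(0)
losses = np.concatenate([rng.normal(0.2, 0.05, 80),   # clean pairs
                         rng.normal(1.0, 0.10, 20)])  # noisy pairs
mask = split_clean_noisy(losses, clean_fraction=0.8)
weights = confidence_weights(losses)
```

In a full training loop, the hard mask (or the soft weights) would gate the contribution of each pair to the loss, so that mismatched pairs stop dragging the learned embedding spaces toward spurious alignments.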