Noisy Correspondence Learning

Noisy correspondence learning tackles the challenge of aligning data across different modalities (e.g., images and text) when the pairings are imperfect or contain errors. Current research focuses on developing robust models that can handle these noisy correspondences, employing techniques such as information-theoretic frameworks to disentangle relevant from irrelevant information, pseudo-classification strategies to improve supervision, and memory-based approaches to mitigate error accumulation; a common pattern across these methods is to estimate how reliable each pair is and to down-weight suspected mismatches during training (see the sketch below). This field is crucial for advancing cross-modal retrieval and related applications, as it enables the effective use of readily available but imperfect data, thereby reducing the reliance on expensive, perfectly annotated datasets.
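
As a minimal sketch of that reliability-weighting idea (not the method of any specific paper), the PyTorch snippet below scores each image-text pair with a soft InfoNCE matching loss and then uses a "small-loss" heuristic to assign higher weights to pairs that are likely correctly matched. All names (`soft_infonce`, `reliability_weights`, `tau`, the random embeddings standing in for encoder outputs) are illustrative assumptions, not part of any established API.

```python
# Illustrative sketch of noisy correspondence handling via per-pair
# reliability weights; all function names and hyperparameters are assumptions.
import torch
import torch.nn.functional as F


def soft_infonce(img_emb, txt_emb, weights, tau=0.07):
    """InfoNCE-style matching loss where each positive pair is scaled by a
    reliability weight in [0, 1]; suspected mismatches contribute less."""
    img_emb = F.normalize(img_emb, dim=-1)
    txt_emb = F.normalize(txt_emb, dim=-1)
    logits = img_emb @ txt_emb.t() / tau          # (B, B) similarity matrix
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets, reduction="none")
    loss_t2i = F.cross_entropy(logits.t(), targets, reduction="none")
    per_pair = 0.5 * (loss_i2t + loss_t2i)        # per-pair matching loss
    return (weights * per_pair).mean(), per_pair.detach()


def reliability_weights(per_pair_loss):
    """Heuristic 'small-loss' weighting: pairs with lower loss than the batch
    median are treated as more likely to be correctly matched."""
    med = per_pair_loss.median()
    scale = per_pair_loss.std().clamp_min(1e-6)
    return torch.sigmoid((med - per_pair_loss) / scale)


# Usage sketch: first score pairs without weighting, then train with soft weights.
B, D = 32, 256
img_emb = torch.randn(B, D, requires_grad=True)   # stand-ins for encoder outputs
txt_emb = torch.randn(B, D, requires_grad=True)
with torch.no_grad():
    _, pair_losses = soft_infonce(img_emb, txt_emb, torch.ones(B))
w = reliability_weights(pair_losses)
loss, _ = soft_infonce(img_emb, txt_emb, w)
loss.backward()
```

Published approaches refine this basic recipe in different ways, for example by fitting a mixture model to the per-pair losses, maintaining memory banks to smooth the reliability estimates over time, or deriving pseudo-labels from the more trustworthy pairs.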

Papers