Robust Contrastive Learning

Robust contrastive learning aims to make self-supervised methods, which learn representations from unlabeled data by pulling similar data points together and pushing dissimilar ones apart, more resistant to noise and adversarial attacks. Current research focuses on theoretically grounded loss functions, such as those based on Rényi divergence, and on complementary techniques like randomized smoothing and adversarial training. These advances improve the generalizability and reliability of contrastive learning models, leading to better performance in applications including image classification, medical image segmentation, and other domains with limited labeled data.
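As a concrete reference point, the standard InfoNCE contrastive loss, which the robust variants mentioned above (e.g. Rényi-divergence-based losses) generalize or harden, can be sketched in NumPy. This is a minimal illustration, not any specific paper's implementation; the batch size, embedding dimension, and temperature are arbitrary choices for the example.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Standard InfoNCE contrastive loss over a batch.

    anchors, positives: (N, D) arrays of L2-normalized embeddings.
    Row i of `positives` is the positive pair for row i of `anchors`;
    every other row in the batch serves as a negative.
    """
    # Cosine similarity matrix scaled by temperature.
    logits = anchors @ positives.T / temperature  # shape (N, N)
    # Numerically stable log-softmax along each row.
    logits -= logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The positive for anchor i sits on the diagonal; minimize its
    # negative log-likelihood against all in-batch negatives.
    return -np.mean(np.diag(log_probs))

def l2_normalize(x):
    return x / np.linalg.norm(x, axis=1, keepdims=True)

rng = np.random.default_rng(0)
emb = l2_normalize(rng.normal(size=(8, 16)))
# Matched views should give a low loss; mismatched views a higher one.
aligned = info_nce_loss(emb, emb)
mismatched = info_nce_loss(emb, emb[::-1])
```

Robust variants modify this objective, for instance by replacing the underlying KL-based divergence with a Rényi divergence, or by generating `anchors` from adversarially perturbed views during training.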

Papers