Non-Contrastive Self-Supervised Learning

Unlike contrastive methods, non-contrastive self-supervised learning (NC-SSL) aims to learn robust, generalizable representations from unlabeled data without comparing similar and dissimilar samples. Current research focuses on understanding and mitigating failure modes such as representation collapse, improving sample efficiency through optimized augmentation strategies and lower-dimensional projector heads, and evaluating NC-SSL across diverse data modalities, including images, speech, and biomedical signals. Because it enables high-performing models with minimal human annotation, the approach holds particular promise where labeled data is scarce or expensive, such as medical imaging and other privacy-sensitive domains.
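
As one concrete illustration of how collapse can be avoided without negative pairs, below is a minimal sketch of a VICReg-style loss in PyTorch: an invariance term pulls two augmented views of each sample together, while variance and covariance penalties keep the embedding dimensions spread out and decorrelated. The function name, loss weights, and tensor shapes are illustrative assumptions, not taken from any particular paper listed here.

```python
import torch
import torch.nn.functional as F

def vicreg_style_loss(z1, z2, inv_w=25.0, var_w=25.0, cov_w=1.0, eps=1e-4):
    """Non-contrastive loss on two embeddings of the same batch.

    z1, z2: (batch, dim) projector outputs for two augmented views.
    The invariance term matches the views; the variance and covariance
    terms penalize collapsed (low-variance, redundant) embeddings,
    so no negative pairs are needed.
    """
    # Invariance: pull the two views of each sample together.
    inv = F.mse_loss(z1, z2)

    # Variance: keep each embedding dimension's std above 1
    # (a hinge penalty that fires only when a dimension collapses).
    std1 = torch.sqrt(z1.var(dim=0) + eps)
    std2 = torch.sqrt(z2.var(dim=0) + eps)
    var = torch.mean(F.relu(1.0 - std1)) + torch.mean(F.relu(1.0 - std2))

    # Covariance: push off-diagonal covariance terms toward zero,
    # decorrelating dimensions so they carry distinct information.
    def off_diag_cov(z):
        z = z - z.mean(dim=0)
        n, d = z.shape
        cov = (z.T @ z) / (n - 1)
        off_diag = cov - torch.diag(torch.diag(cov))
        return off_diag.pow(2).sum() / d

    cov = off_diag_cov(z1) + off_diag_cov(z2)

    return inv_w * inv + var_w * var + cov_w * cov

# Example usage with random stand-in embeddings (batch 256, dim 128):
z1, z2 = torch.randn(256, 128), torch.randn(256, 128)
loss = vicreg_style_loss(z1, z2)
```

In practice z1 and z2 would come from an encoder plus projector head applied to two augmentations of the same images; the variance and covariance weights govern the trade-off between matching views and resisting collapse.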

Papers