Non-Contrastive Self-Supervised Learning
Non-contrastive self-supervised learning (NC-SSL) aims to learn robust, generalizable representations from unlabeled data without the explicit comparisons between similar and dissimilar samples that contrastive methods rely on. Current research focuses on understanding and mitigating failure modes such as representation collapse, improving sample efficiency through optimized augmentation strategies and lower-dimensional projector heads, and evaluating NC-SSL across diverse data modalities, including images, speech, and biomedical signals. Because it enables high-performing models with minimal human annotation, this approach holds significant promise for applications where labeled data is scarce or expensive, particularly in medical imaging and other fields with privacy concerns.
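As a concrete illustration of the ideas above (not taken from the papers listed below), the following is a minimal sketch of a SimSiam-style non-contrastive objective: two augmented views are encoded and projected, and a predictor on one branch is trained to match a stop-gradient target from the other branch, which is one common way to avoid representation collapse. The encoder, dimensions, and names here are illustrative assumptions.

```python
# Minimal sketch of a non-contrastive (SimSiam-style) objective.
# The toy encoder, dimensions, and names are illustrative assumptions;
# a real system would use a CNN or speech encoder and real augmentations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class NonContrastiveModel(nn.Module):
    def __init__(self, in_dim=128, feat_dim=64, proj_dim=32):
        super().__init__()
        # Toy encoder standing in for a backbone network.
        self.encoder = nn.Sequential(nn.Linear(in_dim, feat_dim), nn.ReLU())
        # Low-dimensional projector head.
        self.projector = nn.Linear(feat_dim, proj_dim)
        # Predictor applied to one branch only.
        self.predictor = nn.Sequential(
            nn.Linear(proj_dim, proj_dim), nn.ReLU(), nn.Linear(proj_dim, proj_dim)
        )

    def forward(self, x):
        z = self.projector(self.encoder(x))
        p = self.predictor(z)
        return z, p


def non_contrastive_loss(model, view1, view2):
    # Negative cosine similarity between the predictor output of one view
    # and the *detached* projection of the other (stop-gradient), symmetrized.
    z1, p1 = model(view1)
    z2, p2 = model(view2)
    loss = -(F.cosine_similarity(p1, z2.detach(), dim=-1).mean()
             + F.cosine_similarity(p2, z1.detach(), dim=-1).mean()) / 2
    return loss


# Usage example: random perturbations stand in for real data augmentations.
model = NonContrastiveModel()
x = torch.randn(8, 128)
view1 = x + 0.1 * torch.randn_like(x)
view2 = x + 0.1 * torch.randn_like(x)
print(non_contrastive_loss(model, view1, view2).item())
```

The stop-gradient (`detach()`) on the target branch is the key design choice: without it, the trivial solution where all inputs map to the same vector minimizes the loss, which is exactly the representation collapse discussed above.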
Papers
Non-Contrastive Self-supervised Learning for Utterance-Level Information Extraction from Speech
Jaejin Cho, Jesús Villalba, Laureano Moro-Velazquez, Najim Dehak
Non-Contrastive Self-Supervised Learning of Utterance-Level Speech Representations
Jaejin Cho, Raghavendra Pappagari, Piotr Żelasko, Laureano Moro-Velazquez, Jesús Villalba, Najim Dehak