Self-Supervised Contrastive Learning
Self-supervised contrastive learning aims to learn robust feature representations from unlabeled data by contrasting similar and dissimilar data points. Current research focuses on improving the efficiency and effectiveness of this learning process, exploring techniques like synthetic hard negative generation, novel loss functions (e.g., incorporating local alignment or f-divergences), and adaptive batch processing to enhance representation quality. This approach has shown significant promise across diverse applications, including image classification, video analysis, medical image segmentation, and time series forecasting, by reducing the reliance on large labeled datasets and improving model generalizability.
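The core idea of contrasting similar and dissimilar points is typically realized with an InfoNCE-style loss: embeddings of two augmented views of the same sample (a positive pair) are pulled together, while all other samples in the batch serve as negatives. A minimal NumPy sketch of this loss follows; the function name and shapes are illustrative, not taken from any specific paper above.

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """InfoNCE/NT-Xent loss for a batch of positive pairs.

    z_i, z_j: (N, D) arrays of embeddings for two augmented views
    of the same N samples. Returns the mean contrastive loss.
    """
    # L2-normalize so dot products are cosine similarities
    z_i = z_i / np.linalg.norm(z_i, axis=1, keepdims=True)
    z_j = z_j / np.linalg.norm(z_j, axis=1, keepdims=True)
    z = np.concatenate([z_i, z_j], axis=0)      # (2N, D)
    sim = z @ z.T / temperature                 # pairwise similarities
    n = z.shape[0]
    # Mask self-similarity so a sample is never its own negative
    np.fill_diagonal(sim, -np.inf)
    # The positive for row k is its counterpart in the other view
    half = n // 2
    pos_idx = np.concatenate([np.arange(half) + half, np.arange(half)])
    # Cross-entropy: negative log-probability of the positive pair
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(n), pos_idx].mean()
```

Techniques such as synthetic hard negative generation modify which rows of `sim` act as negatives, and temperature-like hyperparameters control how sharply the loss focuses on the hardest ones.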
Papers
(17 papers, dated December 31, 2021 through October 4, 2022; titles not preserved in this extract.)