Self-Supervised Contrastive Learning
Self-supervised contrastive learning aims to learn robust feature representations from unlabeled data by contrasting similar and dissimilar data points. Current research focuses on improving the efficiency and effectiveness of this learning process, exploring techniques like synthetic hard negative generation, novel loss functions (e.g., incorporating local alignment or f-divergences), and adaptive batch processing to enhance representation quality. This approach has shown significant promise across diverse applications, including image classification, video analysis, medical image segmentation, and time series forecasting, by reducing the reliance on large labeled datasets and improving model generalizability.
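The core mechanism described above, pulling embeddings of similar (positive) pairs together while pushing dissimilar (negative) pairs apart, is commonly implemented with the InfoNCE / NT-Xent objective. As a minimal illustrative sketch (not the method of any specific paper listed here), the loss can be computed over a batch of two augmented views per example:

```python
import numpy as np

def info_nce_loss(z_i, z_j, temperature=0.5):
    """NT-Xent (InfoNCE) loss over a batch of positive pairs.

    z_i, z_j: (N, D) arrays of L2-normalized embeddings of two
    augmented views of the same N examples. Each embedding's positive
    is its other view; all other 2N - 2 embeddings act as negatives.
    """
    n = z_i.shape[0]
    z = np.concatenate([z_i, z_j], axis=0)           # (2N, D)
    sim = (z @ z.T) / temperature                    # scaled cosine similarities
    np.fill_diagonal(sim, -np.inf)                   # exclude self-similarity
    # Row k's positive is its counterpart view: k+N for k<N, k-N otherwise.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logits = sim - sim.max(axis=1, keepdims=True)    # numerical stability
    log_prob = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()   # cross-entropy on positives
```

The loss decreases as each pair of views becomes more similar relative to the negatives; techniques such as synthetic hard negative generation modify which embeddings fill the negative slots, while the f-divergence and local-alignment variants mentioned above replace or augment this objective.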