Self-Supervised Contrastive Learning
Self-supervised contrastive learning aims to learn robust feature representations from unlabeled data by contrasting similar and dissimilar data points. Current research focuses on improving the efficiency and effectiveness of this learning process, exploring techniques like synthetic hard negative generation, novel loss functions (e.g., incorporating local alignment or f-divergences), and adaptive batch processing to enhance representation quality. This approach has shown significant promise across diverse applications, including image classification, video analysis, medical image segmentation, and time series forecasting, by reducing the reliance on large labeled datasets and improving model generalizability.
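The core idea of contrasting similar and dissimilar points is commonly realized with a loss such as NT-Xent (the normalized temperature-scaled cross-entropy used in SimCLR-style methods). The following is a minimal NumPy sketch of that loss, not any specific paper's implementation; the function name and the choice of temperature are illustrative.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of the same batch;
    row i of z1 and row i of z2 form a positive pair, every other
    row in the combined batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize
    sim = z @ z.T / temperature                         # scaled cosine similarity
    n = z.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    pos = (np.arange(n) + n // 2) % n                   # index of each sample's positive
    # Cross-entropy of each row's softmax against its positive pair
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(n), pos]).mean()
```

Pulling matched views together and pushing all other batch members apart is what drives the representation quality; techniques like synthetic hard negative generation modify which negatives enter `sim`, while alternative loss functions replace the softmax cross-entropy term itself.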