Self-Supervised Contrastive Learning
Self-supervised contrastive learning aims to learn robust feature representations from unlabeled data by pulling together embeddings of similar data points and pushing apart embeddings of dissimilar ones. Current research focuses on improving the efficiency and effectiveness of this learning process, exploring techniques such as synthetic hard-negative generation, novel loss functions (e.g., incorporating local alignment or f-divergences), and adaptive batch processing to enhance representation quality. The approach has shown significant promise across diverse applications, including image classification, video analysis, medical image segmentation, and time-series forecasting, by reducing reliance on large labeled datasets and improving model generalizability.
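The contrast between similar and dissimilar points is typically implemented with an InfoNCE-style loss: each sample's two augmented views form a positive pair, and all other samples in the batch act as negatives. The sketch below is a minimal NumPy illustration of this idea (function name, batch size, and temperature are illustrative choices, not taken from any specific paper discussed here).

```python
import numpy as np

def info_nce_loss(z_a, z_b, temperature=0.5):
    """InfoNCE-style contrastive loss for two batches of paired embeddings.

    z_a[i] and z_b[i] are embeddings of two augmented views of sample i;
    every other pair in the batch serves as a negative.
    """
    # L2-normalize so dot products are cosine similarities
    z_a = z_a / np.linalg.norm(z_a, axis=1, keepdims=True)
    z_b = z_b / np.linalg.norm(z_b, axis=1, keepdims=True)

    logits = z_a @ z_b.T / temperature           # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)  # numerical stability
    # Positives lie on the diagonal; softmax cross-entropy against them
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Identical views: positives are maximally similar, so the loss is small
loss_aligned = info_nce_loss(z, z)
# Unrelated views: the loss sits near chance level (about log N)
loss_random = info_nce_loss(z, rng.normal(size=(8, 16)))
print(loss_aligned < loss_random)  # aligned pairs yield a lower loss
```

Lower loss for aligned views is exactly the training signal: an encoder that maps augmentations of the same input to nearby embeddings drives this objective down, which is what research directions such as harder negatives and modified similarity measures aim to improve.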