Self-Supervised Contrastive Representation
Self-supervised contrastive representation learning aims to learn robust data representations without human-labeled data by pulling together embeddings of similar (positive) pairs and pushing apart embeddings of dissimilar (negative) pairs. Current research focuses on improving the robustness of these methods, in particular by addressing unrepresentative data pairings and sensitivity to batch size, and on extending them to diverse domains, including time series analysis, medical imaging, and remote sensing. These techniques are proving valuable for tasks such as anomaly detection, image classification, and video retrieval, especially when labeled data is scarce, thereby advancing both scientific understanding and practical applications.
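To make the contrastive objective concrete, below is a minimal sketch of a standard SimCLR-style NT-Xent (InfoNCE) loss, assuming PyTorch; the function name, tensor shapes, and toy inputs are illustrative assumptions rather than the method of any specific paper listed here.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent loss over two augmented views of the same batch.

    z1, z2: [batch, dim] embeddings; (z1[i], z2[i]) form the positive pair,
    every other sample in the combined 2*batch set acts as a negative.
    """
    batch = z1.size(0)
    z = torch.cat([z1, z2], dim=0)            # [2B, dim]
    z = F.normalize(z, dim=1)                 # cosine similarity via dot products
    sim = z @ z.t() / temperature             # [2B, 2B] similarity logits
    # Exclude self-similarity so a sample is never its own negative.
    mask = torch.eye(2 * batch, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))
    # The positive for index i is its counterpart from the other view.
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)]).to(z.device)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Random embeddings standing in for an encoder's projected outputs.
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(z1, z2).item())
```

Because every other sample in the batch serves as a negative, the quality of the loss estimate depends on batch size and on whether those in-batch "negatives" are truly dissimilar, which is why robustness to pairing errors and batch-size effects is an active research focus.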
Papers
Thirteen papers on this topic, dated from December 4, 2021 to September 25, 2024.