Self-Supervised Methods
Self-supervised learning (SSL) trains models on unlabeled data by formulating pretext tasks that implicitly capture the underlying structure of the data. Current research focuses on improving the efficiency and robustness of SSL across diverse modalities (image, audio, video, medical imaging), exploring architectures such as transformers and autoencoders, and employing techniques such as contrastive learning, masked image modeling, and clustering. These advances matter because they reduce reliance on expensive labeled datasets, enabling powerful models for applications such as speech recognition, image reconstruction, and medical image analysis.
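To make the contrastive-learning idea concrete, here is a minimal sketch of the InfoNCE (NT-Xent) objective that underpins methods such as SimCLR. The function name, batch size, and embedding dimension below are illustrative assumptions, not taken from any specific paper.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE / NT-Xent loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings of two views of the same N examples.
    Positive pairs are (z1[i], z2[i]); every other pair is a negative.
    """
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)        # (2N, D) stacked views
    z = F.normalize(z, dim=1)             # unit norm -> dot product = cosine sim
    sim = z @ z.t() / temperature         # (2N, 2N) similarity logits
    # Mask self-similarity so an example is never its own negative.
    sim.fill_diagonal_(float("-inf"))
    # Row i's positive sits N rows away (view 1 <-> view 2).
    targets = torch.cat([torch.arange(n, 2 * n),
                         torch.arange(0, n)]).to(sim.device)
    return F.cross_entropy(sim, targets)

# Illustrative usage: in practice z1 and z2 would come from a shared
# encoder applied to two random augmentations of the same batch.
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```

Minimizing this loss pulls the two views of each example together in embedding space while pushing apart all other pairs, which is what lets the encoder learn useful representations without labels.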