Self-Supervised Methods
Self-supervised learning (SSL) trains models on unlabeled data by formulating pretext tasks whose solution implicitly requires capturing the underlying structure of the data. Current research focuses on improving the efficiency and robustness of SSL across diverse modalities (image, audio, video, medical imaging), on architectures such as transformers and autoencoders, and on techniques such as contrastive learning, masked image modeling, and clustering. These advances matter because they reduce reliance on expensive labeled datasets, enabling powerful models for applications such as speech recognition, image reconstruction, and medical image analysis.
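As a concrete illustration of one of these techniques, the sketch below implements a SimCLR-style NT-Xent contrastive loss in PyTorch. It is a minimal example under assumed settings: the batch size, embedding dimension, and temperature are placeholders, and the random tensors stand in for the outputs of a shared encoder applied to two augmented views of the same images; none of this is drawn from a specific paper on this page.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1 and z2 are embeddings of two augmented views of the same batch,
    each of shape (batch, dim). Matching rows form positive pairs; all
    other rows in the combined batch act as negatives.
    """
    batch = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2*batch, dim)
    sim = z @ z.T / temperature                          # pairwise similarity logits
    # Mask self-similarities so an example cannot match itself.
    sim.fill_diagonal_(float("-inf"))
    # For row i, the positive example sits `batch` positions away.
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)])
    return F.cross_entropy(sim, targets)

# Usage sketch: these random tensors stand in for encoder outputs
# on two augmentations of the same 8 images.
z1 = torch.randn(8, 128)
z2 = torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

Framing the positive-pair match as a classification problem over the whole batch is what lets a single `cross_entropy` call implement the contrastive objective; larger batches simply supply more negatives.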