Self-Supervised Methods
Self-supervised learning (SSL) trains models on unlabeled data by formulating pretext tasks that implicitly capture the underlying structure of the data. Current research focuses on improving the efficiency and robustness of SSL across diverse modalities (images, audio, video, medical imaging), exploring architectures such as transformers and autoencoders, and employing techniques such as contrastive learning, masked image modeling, and clustering. These advances matter because they reduce reliance on expensive labeled datasets, enabling powerful models for applications including speech recognition, image reconstruction, and medical image analysis.
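To make the contrastive-learning idea concrete, below is a minimal sketch of one widely used pretext objective, the NT-Xent loss from SimCLR: two augmented views of each image are embedded, matching views are pulled together, and all other samples in the batch act as negatives. The function name `nt_xent_loss` and the toy tensors are illustrative assumptions, not taken from any specific paper's codebase.

```python
# Minimal sketch of an NT-Xent (SimCLR-style) contrastive loss in PyTorch.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two batches of embeddings.

    z1, z2: (N, D) projections of two augmented views of the same N images.
    Each (z1[i], z2[i]) pair is positive; every other pair is a negative.
    """
    n = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, D)
    sim = z @ z.t() / temperature                          # (2N, 2N)
    # Mask self-similarity so a sample is never treated as its own negative.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: in practice z1 and z2 come from an encoder + projection head applied
# to two random augmentations of the same batch; random tensors stand in here.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

Masked image modeling and clustering-based objectives differ in the pretext task but share the same premise: the supervisory signal is constructed from the data itself rather than from human labels.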