Self-Supervised Methods
Self-supervised learning (SSL) trains models on unlabeled data by formulating pretext tasks that implicitly capture the underlying structure of the data. Current research focuses on improving the efficiency and robustness of SSL across diverse modalities (images, audio, video, medical imaging), exploring architectures such as transformers and autoencoders, and employing techniques such as contrastive learning, masked image modeling, and clustering. These advances matter because they reduce reliance on expensive labeled datasets, enabling powerful models for applications including speech recognition, image reconstruction, and medical image analysis. A minimal sketch of a contrastive objective follows below.
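As an illustration of the contrastive-learning pretext task mentioned above, here is a minimal sketch of an InfoNCE-style loss in PyTorch, using a simplified cross-view variant where each sample's two augmented views form the positive pair and all other samples in the batch act as negatives. The function name, the toy linear encoder, and the noise-based "augmentations" are illustrative assumptions, not any specific paper's implementation.

```python
# Sketch of a cross-view InfoNCE contrastive loss (illustrative, not from a
# specific paper). Row i of z1 and row i of z2 are embeddings of two
# augmented views of the same sample; off-diagonal pairs serve as negatives.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature      # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0))      # positives lie on the diagonal
    # Symmetrize so each view serves once as the anchor.
    return 0.5 * (F.cross_entropy(logits, targets)
                  + F.cross_entropy(logits.t(), targets))

if __name__ == "__main__":
    encoder = torch.nn.Linear(32, 16)       # placeholder encoder (assumption)
    x = torch.randn(8, 32)
    # Additive noise stands in for a real augmentation pipeline.
    view1 = x + 0.1 * torch.randn_like(x)
    view2 = x + 0.1 * torch.randn_like(x)
    loss = info_nce_loss(encoder(view1), encoder(view2))
    loss.backward()
    print(f"contrastive loss: {loss.item():.4f}")
```

Minimizing this loss pulls embeddings of the two views of each sample together while pushing apart embeddings of different samples, which is the core mechanism shared by contrastive SSL methods.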
Papers
[Paper list: 19 entries dated between September 1, 2022 and April 10, 2023; titles and links were not preserved in this extraction.]