Self-Supervised Tasks
Self-supervised learning trains models on unlabeled data by creating pretext tasks that implicitly capture the underlying structure of the data, thereby improving downstream performance on labeled tasks. Current research focuses on designing effective pretext tasks for various data modalities (images, text, graphs, time series) and on integrating self-supervision with existing architectures such as transformers, autoencoders, and GANs, often within multi-task or meta-learning frameworks. This approach matters because it reduces reliance on expensive labeled data, leading to more efficient and robust models across diverse applications, including image classification, natural language processing, and anomaly detection.
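To make the idea of a pretext task concrete, here is a minimal sketch of rotation prediction, one classic image pretext task: each unlabeled image is rotated by 0/90/180/270 degrees, and the model learns to predict which rotation was applied, so the "label" comes from the data itself. This is an illustrative example, not the method of any particular paper listed on this page; the encoder (`SmallEncoder`) and all hyperparameters are placeholder assumptions.

```python
# Illustrative sketch of a rotation-prediction pretext task (PyTorch).
# `SmallEncoder` is a toy stand-in for any backbone (ResNet, ViT, ...).
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Tiny CNN encoder producing a flat feature vector per image."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.conv = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, feat_dim, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
        )

    def forward(self, x):
        return self.conv(x).flatten(1)

def make_rotation_batch(images):
    """Build the pretext task: rotate each image by 0/90/180/270 degrees.

    The rotation index serves as a free label derived from the data itself.
    """
    rotated, labels = [], []
    for k in range(4):
        rotated.append(torch.rot90(images, k, dims=(2, 3)))
        labels.append(torch.full((images.size(0),), k, dtype=torch.long))
    return torch.cat(rotated), torch.cat(labels)

encoder = SmallEncoder()
rotation_head = nn.Linear(64, 4)  # predicts which of the 4 rotations was applied
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(rotation_head.parameters()), lr=1e-3
)

unlabeled_images = torch.randn(8, 3, 32, 32)  # stand-in for an unlabeled batch
inputs, targets = make_rotation_batch(unlabeled_images)

logits = rotation_head(encoder(inputs))
loss = F.cross_entropy(logits, targets)  # pretext loss; no human labels needed
loss.backward()
optimizer.step()
```

After pretext training, the encoder's weights are typically reused (frozen or fine-tuned) for a downstream labeled task such as classification, which is where the benefit of self-supervision shows up.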