Self-Supervised Tasks
Self-supervised learning trains models on unlabeled data by constructing pretext tasks whose solutions implicitly capture the underlying structure of the data, which in turn improves downstream performance on labeled tasks. Current research focuses on designing effective pretext tasks for different data modalities (images, text, graphs, time series) and on integrating self-supervision with existing architectures such as transformers, autoencoders, and GANs, often within multi-task or meta-learning frameworks. This approach matters because it reduces reliance on expensive labeled data, yielding more efficient and robust models across diverse applications, including image classification, natural language processing, and anomaly detection.
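As a concrete illustration of a pretext task (a minimal sketch, not tied to any particular paper listed here), the snippet below implements rotation prediction in PyTorch: each unlabeled image is rotated by 0, 90, 180, or 270 degrees, and the model is trained to predict which rotation was applied, so the rotation index serves as a free pseudo-label. The encoder, head, and helper names are hypothetical stand-ins for a real backbone and training loop.

```python
# Sketch of a rotation-prediction pretext task (hypothetical names; PyTorch assumed).
import torch
import torch.nn as nn

class SmallEncoder(nn.Module):
    """Tiny convolutional encoder standing in for a real backbone."""
    def __init__(self, feat_dim=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim),
        )

    def forward(self, x):
        return self.net(x)

def rotation_pretext_batch(images):
    """Create pseudo-labels by rotating each image 0/90/180/270 degrees."""
    labels = torch.randint(0, 4, (images.size(0),))
    rotated = torch.stack([torch.rot90(img, int(k), dims=(1, 2))
                           for img, k in zip(images, labels)])
    return rotated, labels

encoder = SmallEncoder()
head = nn.Linear(64, 4)  # predicts which of the 4 rotations was applied
optimizer = torch.optim.Adam(
    list(encoder.parameters()) + list(head.parameters()), lr=1e-3)
criterion = nn.CrossEntropyLoss()

images = torch.randn(16, 3, 32, 32)          # stand-in for an unlabeled batch
rotated, labels = rotation_pretext_batch(images)
logits = head(encoder(rotated))
loss = criterion(logits, labels)              # no human labels needed
loss.backward()
optimizer.step()
```

After pretext training, the head is typically discarded and the encoder's representations are reused (e.g., fine-tuned or probed linearly) on the downstream labeled task.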