Self-Supervised Pre-Trained Models
Self-supervised pre-trained models (SSPMs) leverage vast amounts of unlabeled data to learn robust feature representations, which are then fine-tuned for various downstream tasks, improving performance, especially in data-scarce scenarios. Current research focuses on optimizing SSPM architectures for specific data types (e.g., time series, images, speech) and exploring effective strategies for knowledge transfer and model compression, including techniques like adapters and structured pruning. The resulting improvements in accuracy and efficiency across diverse applications, such as speech recognition, image clustering, and medical image classification, highlight the significant impact of SSPMs on machine learning.
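The adapter technique mentioned above inserts a small trainable bottleneck after a frozen pre-trained layer, so that fine-tuning updates only a few parameters. A minimal NumPy sketch of the idea (all weights and dimensions here are hypothetical stand-ins, not taken from any specific SSPM):

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical frozen pre-trained encoder layer (a stand-in for a real SSPM).
d_model, d_bottleneck = 16, 4
W_pretrained = rng.standard_normal((d_model, d_model))  # frozen during fine-tuning

# Adapter: down-project, nonlinearity, up-project. Only these small matrices
# would be updated on the downstream task.
W_down = rng.standard_normal((d_model, d_bottleneck)) * 0.01
W_up = np.zeros((d_bottleneck, d_model))  # zero-init: the adapter starts as identity

def encoder_with_adapter(x):
    h = np.tanh(x @ W_pretrained)                    # frozen pre-trained transform
    adapter_out = np.maximum(h @ W_down, 0.0) @ W_up  # ReLU bottleneck
    return h + adapter_out                           # residual connection

x = rng.standard_normal((2, d_model))
out = encoder_with_adapter(x)
print(out.shape)  # (2, 16)
```

With the up-projection zero-initialized, the adapted encoder initially reproduces the frozen model's output exactly; training then moves only the adapter weights, which is why adapters are attractive for data-scarce fine-tuning and cheap model specialization.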