Self-Supervised Pretraining
Self-supervised pretraining (SSP) leverages large amounts of unlabeled data to learn robust feature representations before fine-tuning on specific downstream tasks, reducing the need for extensive labeled datasets. Current research applies SSP across diverse domains, including medical imaging (using architectures such as Vision Transformers and masked autoencoders), time series analysis, and remote sensing, often comparing it against fully supervised approaches. By improving performance in low-data regimes and strengthening robustness to noise and domain shifts, SSP enables efficient and effective model training where labeled data are scarce.
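As a rough illustration of the pretrain-then-fine-tune idea, the sketch below shows a masked-autoencoder-style pretraining loop in PyTorch. It is a minimal, simplified toy (not the method of any listed paper): images are assumed to be pre-patchified into flat vectors, masked patches are zeroed rather than dropped, and small MLPs stand in for the Vision Transformer encoder and decoder; all sizes and names (PATCHES, PATCH_DIM, MASK_RATIO, pretrain_step) are hypothetical.

```python
# Minimal sketch of masked-autoencoder-style self-supervised pretraining.
# Assumptions: inputs are already flattened image patches; masking is done by
# zeroing patches (real MAEs drop masked tokens); tiny MLPs stand in for a ViT.
import torch
import torch.nn as nn

PATCHES, PATCH_DIM, MASK_RATIO = 16, 48, 0.75  # illustrative sizes only

encoder = nn.Sequential(nn.Linear(PATCH_DIM, 128), nn.GELU(), nn.Linear(128, 128))
decoder = nn.Sequential(nn.Linear(128, 128), nn.GELU(), nn.Linear(128, PATCH_DIM))
optimizer = torch.optim.AdamW(
    list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3
)

def pretrain_step(patches):
    """One self-supervised step: mask patches, encode, reconstruct the masked ones."""
    batch, n, _ = patches.shape
    mask = torch.rand(batch, n) < MASK_RATIO      # True = patch hidden from the model
    visible = patches.clone()
    visible[mask] = 0.0                           # simplified masking: zero out hidden patches
    recon = decoder(encoder(visible))
    loss = ((recon - patches) ** 2)[mask].mean()  # loss only on the masked patches
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

# Pretraining uses unlabeled data only; random tensors stand in for patchified images.
for step in range(3):
    unlabeled_batch = torch.randn(32, PATCHES, PATCH_DIM)
    print(f"step {step}: reconstruction loss = {pretrain_step(unlabeled_batch):.4f}")
```

After pretraining, the encoder would typically be reused for the downstream task, e.g. by attaching a small classification head and fine-tuning on the limited labeled data, which is where the low-data-regime gains described above come from.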
Papers
Self-Supervised Learning in Electron Microscopy: Towards a Foundation Model for Advanced Image Analysis
Bashir Kazimi, Karina Ruzaeva, Stefan Sandfeld
Downstream Task Guided Masking Learning in Masked Autoencoders Using Multi-Level Optimization
Han Guo, Ramtin Hosseini, Ruiyi Zhang, Sai Ashish Somayajula, Ranak Roy Chowdhury, Rajesh K. Gupta, Pengtao Xie