Self-Supervised Pretraining
Self-supervised pretraining (SSP) leverages vast amounts of unlabeled data to learn robust feature representations before fine-tuning on specific downstream tasks, thereby reducing the need for extensive labeled datasets. Current research applies SSP across diverse domains, including medical imaging (using architectures such as Vision Transformers and masked autoencoders), time series analysis, and remote sensing, often comparing its effectiveness against fully supervised training. By improving performance in low-data regimes and increasing robustness to noise and domain shift, SSP enables effective model training even when labeled data are scarce.
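The pretrain-then-fine-tune pattern described above can be illustrated with a minimal sketch. The example below uses a toy MLP encoder, synthetic tensors, and a masked-reconstruction objective as the pretext task; all names, dimensions, and hyperparameters are illustrative assumptions and do not reproduce the methods of the papers listed here.

```python
import torch
import torch.nn as nn

# Illustrative dimensions; real setups would use images or time series with a ViT-style encoder.
INPUT_DIM, HIDDEN_DIM, NUM_CLASSES, MASK_RATIO = 64, 128, 5, 0.5

encoder = nn.Sequential(nn.Linear(INPUT_DIM, HIDDEN_DIM), nn.ReLU(),
                        nn.Linear(HIDDEN_DIM, HIDDEN_DIM))
decoder = nn.Linear(HIDDEN_DIM, INPUT_DIM)       # reconstruction head, used only during pretraining
classifier = nn.Linear(HIDDEN_DIM, NUM_CLASSES)  # downstream head, used only during fine-tuning

# Stage 1: self-supervised pretraining on unlabeled data via masked reconstruction.
unlabeled = torch.randn(1024, INPUT_DIM)         # placeholder for a large unlabeled corpus
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for _ in range(10):
    mask = (torch.rand_like(unlabeled) > MASK_RATIO).float()
    recon = decoder(encoder(unlabeled * mask))   # reconstruct the full input from its masked view
    loss = ((recon - unlabeled) ** 2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Stage 2: fine-tuning the pretrained encoder on a small labeled downstream dataset.
labeled_x = torch.randn(64, INPUT_DIM)           # placeholder for the scarce labeled data
labeled_y = torch.randint(0, NUM_CLASSES, (64,))
opt = torch.optim.Adam(list(encoder.parameters()) + list(classifier.parameters()), lr=1e-4)
for _ in range(10):
    logits = classifier(encoder(labeled_x))
    loss = nn.functional.cross_entropy(logits, labeled_y)
    opt.zero_grad()
    loss.backward()
    opt.step()
```

The key design point is that the encoder weights learned from the unlabeled corpus are reused in the second stage, so the small labeled set only has to adapt an already useful representation rather than train one from scratch.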
Papers
Self-supervised Pretraining and Transfer Learning Enable Flu and COVID-19 Predictions in Small Mobile Sensing Datasets
Mike A. Merrill, Tim Althoff
AI for Porosity and Permeability Prediction from Geologic Core X-Ray Micro-Tomography
Zangir Iklassov, Dmitrii Medvedev, Otabek Nazarov, Shakhboz Razzokov
Learning to segment with limited annotations: Self-supervised pretraining with regression and contrastive loss in MRI
Lavanya Umapathy, Zhiyang Fu, Rohit Philip, Diego Martin, Maria Altbach, Ali Bilgin