Self-Supervised Pretraining
Self-supervised pretraining (SSP) leverages large amounts of unlabeled data to learn robust feature representations before fine-tuning on specific downstream tasks, reducing the need for extensive labeled datasets. Current research applies SSP across diverse domains, including medical imaging (using architectures such as Vision Transformers and masked autoencoders), time series analysis, and remote sensing, often comparing it against fully supervised training. By improving performance in low-data regimes and strengthening robustness to noise and domain shifts, SSP enables efficient and effective model training when labeled data is scarce.
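The two-stage recipe described above can be illustrated with a minimal sketch, assuming a toy PyTorch setup rather than any of the listed papers' actual implementations: random tensors stand in for unlabeled and labeled data, and the Encoder class, masking ratio, and layer sizes are hypothetical choices. Stage 1 pretrains an encoder with a masked-reconstruction objective (in the spirit of masked autoencoders); Stage 2 reuses that encoder with a small classification head on limited labeled data.

```python
# Toy sketch of SSP: masked-reconstruction pretraining, then fine-tuning.
# All names, sizes, and data here are illustrative assumptions.
import torch
import torch.nn as nn

class Encoder(nn.Module):
    def __init__(self, dim=64, hidden=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim, hidden), nn.ReLU(), nn.Linear(hidden, hidden)
        )

    def forward(self, x):
        return self.net(x)

dim, hidden = 64, 128
encoder = Encoder(dim, hidden)
decoder = nn.Linear(hidden, dim)  # reconstruction head, discarded after pretraining

# --- Stage 1: self-supervised pretraining on (stand-in) unlabeled data ---
opt = torch.optim.Adam(list(encoder.parameters()) + list(decoder.parameters()), lr=1e-3)
for step in range(100):
    x = torch.randn(32, dim)                      # unlabeled batch (random stand-in)
    mask = (torch.rand_like(x) < 0.5).float()     # mask ~50% of input features
    recon = decoder(encoder(x * (1 - mask)))      # reconstruct from the visible part
    loss = ((recon - x) ** 2 * mask).sum() / mask.sum()  # loss on masked positions only
    opt.zero_grad()
    loss.backward()
    opt.step()

# --- Stage 2: fine-tune the pretrained encoder on a small labeled set ---
head = nn.Linear(hidden, 10)                      # downstream classifier head
ft_opt = torch.optim.Adam(list(encoder.parameters()) + list(head.parameters()), lr=1e-4)
criterion = nn.CrossEntropyLoss()
for step in range(20):
    x = torch.randn(8, dim)                       # small labeled batch (random stand-in)
    y = torch.randint(0, 10, (8,))
    loss = criterion(head(encoder(x)), y)
    ft_opt.zero_grad()
    loss.backward()
    ft_opt.step()
```

The key design point the sketch reflects is that the pretraining head is thrown away: only the encoder's learned representation is transferred, which is what lets a small labeled set suffice downstream.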
Papers
ASiT: Local-Global Audio Spectrogram vIsion Transformer for Event Classification
Sara Atito, Muhammad Awais, Wenwu Wang, Mark D Plumbley, Josef Kittler
SPCXR: Self-supervised Pretraining using Chest X-rays Towards a Domain Specific Foundation Model
Syed Muhammad Anwar, Abhijeet Parida, Sara Atito, Muhammad Awais, Gustavo Nino, Josef Kittler, Marius George Linguraru
Can we Adopt Self-supervised Pretraining for Chest X-Rays?
Arsh Verma, Makarand Tapaswi