Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures including transformers and convolutional neural networks. These advances matter because they reduce reliance on expensive and time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring optimal model and data sizes for a given computational budget.
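To make the contrastive-learning idea mentioned above concrete, here is a minimal sketch of an NT-Xent-style contrastive loss (the objective used by SimCLR-family methods) in plain NumPy. The function name and shapes are illustrative, not taken from any of the listed papers: two L2-normalized embeddings of augmented views of the same input form a positive pair, and all other embeddings in the batch serve as negatives.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (normalized temperature-scaled cross-entropy) contrastive loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N inputs; row i of z1 and row i of z2 form a positive pair.
    Illustrative sketch only, not an implementation from a specific paper.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = (z @ z.T) / temperature                       # scaled cosine sims
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                      # exclude self-pairs

    # Each row's positive partner: i <-> i + n.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])

    # Numerically stable -log softmax at the positive index.
    row_max = sim.max(axis=1, keepdims=True)
    logsumexp = row_max.squeeze(1) + np.log(np.exp(sim - row_max).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

Pulling the two views toward each other while pushing apart all other batch elements is what lets the encoder learn representations without labels; masked autoencoders reach a similar goal by reconstructing hidden patches instead.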
Papers
MV-MR: multi-views and multi-representations for self-supervised learning and knowledge distillation
Vitaliy Kinakh, Mariia Drozdova, Slava Voloshynovskiy
Self-supervised learning of a tailored Convolutional Auto Encoder for histopathological prostate grading
Zahra Tabatabaei, Adrian Colomer, Kjersti Engan, Javier Oliver, Valery Naranjo
Unified Mask Embedding and Correspondence Learning for Self-Supervised Video Segmentation
Liulei Li, Wenguan Wang, Tianfei Zhou, Jianwu Li, Yi Yang
On the Effects of Self-supervision and Contrastive Alignment in Deep Multi-view Clustering
Daniel J. Trosten, Sigurd Løkse, Robert Jenssen, Michael C. Kampffmeyer
CSSL-MHTR: Continual Self-Supervised Learning for Scalable Multi-script Handwritten Text Recognition
Marwa Dhiaf, Mohamed Ali Souibgui, Kai Wang, Yuyang Liu, Yousri Kessentini, Alicia Fornés, Ahmed Cheikh Rouhou
SSL-Cleanse: Trojan Detection and Mitigation in Self-Supervised Learning
Mengxin Zheng, Jiaqi Xue, Zihao Wang, Xun Chen, Qian Lou, Lei Jiang, Xiaofeng Wang
Feature propagation as self-supervision signals on graphs
Oscar Pina, Verónica Vilaplana
Task-specific Fine-tuning via Variational Information Bottleneck for Weakly-supervised Pathology Whole Slide Image Classification
Honglin Li, Chenglu Zhu, Yunlong Zhang, Yuxuan Sun, Zhongyi Shui, Wenwei Kuang, Sunyi Zheng, Lin Yang
Efficient Self-supervised Continual Learning with Progressive Task-correlated Layer Freezing
Li Yang, Sen Lin, Fan Zhang, Junshan Zhang, Deliang Fan
FireRisk: A Remote Sensing Dataset for Fire Risk Assessment with Benchmarks Using Supervised and Self-supervised Learning
Shuchang Shen, Sachith Seneviratne, Xinye Wanyan, Michael Kirley
Three Guidelines You Should Know for Universally Slimmable Self-Supervised Learning
Yun-Hao Cao, Peiqin Sun, Shuchang Zhou
Functional Knowledge Transfer with Self-supervised Representation Learning
Prakash Chandra Chhipa, Muskaan Chopra, Gopal Mengi, Varun Gupta, Richa Upadhyay, Meenakshi Subhash Chippa, Kanjar De, Rajkumar Saini, Seiichi Uchida, Marcus Liwicki
Extending global-local view alignment for self-supervised learning with remote sensing imagery
Xinye Wanyan, Sachith Seneviratne, Shuchang Shen, Michael Kirley