Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
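As a concrete illustration of the contrastive-learning family mentioned above, the following is a minimal NumPy sketch of an NT-Xent-style objective (the normalized temperature-scaled cross-entropy loss popularized by SimCLR). The function name, temperature value, and toy data are illustrative choices, not taken from any of the papers listed below.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent contrastive loss over two augmented views.

    z1, z2: (N, D) embeddings of two views of the same N inputs.
    Each sample's positive is its counterpart view; all other
    2N - 2 embeddings in the batch act as negatives.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # unit-normalize
    sim = z @ z.T / temperature                        # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # positive index for sample i is its other view: (i + N) mod 2N
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()

# Toy check: two nearly identical "views" should score a lower loss
# than two unrelated batches, since their positives align.
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned_loss = nt_xent_loss(x, x + 0.01 * rng.normal(size=x.shape))
random_loss = nt_xent_loss(x, rng.normal(size=(8, 16)))
```

In practice the embeddings come from an encoder (e.g. a transformer or CNN) applied to two random augmentations of each input, and the loss is minimized with stochastic gradient descent; the sketch only shows the objective itself.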
Papers
Open Implementation and Study of BEST-RQ for Speech Processing
Ryan Whetten, Titouan Parcollet, Marco Dinarelli, Yannick Estève
Exploring Correlations of Self-Supervised Tasks for Graphs
Taoran Fang, Wei Zhou, Yifei Sun, Kaiqiao Han, Lvbin Ma, Yang Yang
FedSC: Provable Federated Self-supervised Learning with Spectral Contrastive Objective over Non-i.i.d. Data
Shusen Jing, Anlan Yu, Shuai Zhang, Songyang Zhang
Collecting Consistently High Quality Object Tracks with Minimal Human Involvement by Using Self-Supervised Learning to Detect Tracker Errors
Samreen Anjum, Suyog Jain, Danna Gurari
Classification of Breast Cancer Histopathology Images using a Modified Supervised Contrastive Learning Method
Matina Mahdizadeh Sani, Ali Royat, Mahdieh Soleymani Baghshah
Multi-Modality Spatio-Temporal Forecasting via Self-Supervised Learning
Jiewen Deng, Renhe Jiang, Jiaqi Zhang, Xuan Song
Self-Supervised Learning for Real-World Super-Resolution from Dual and Multiple Zoomed Observations
Zhilu Zhang, Ruohao Wang, Hongzhi Zhang, Wangmeng Zuo
TIPAA-SSL: Text Independent Phone-to-Audio Alignment based on Self-Supervised Learning and Knowledge Transfer
Noé Tits, Prernna Bhatnagar, Thierry Dutoit
Self-Supervised Learning for Interventional Image Analytics: Towards Robust Device Trackers
Saahil Islam, Venkatesh N. Murthy, Dominik Neumann, Badhan Kumar Das, Puneet Sharma, Andreas Maier, Dorin Comaniciu, Florin C. Ghesu
Explicitly Modeling Universality into Self-Supervised Learning
Jingyao Wang, Wenwen Qiang, Zeen Song, Lingyu Si, Jiangmeng Li, Changwen Zheng, Bing Su