Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures including transformers and convolutional neural networks. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
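To make the contrastive-learning idea mentioned above concrete, here is a minimal NumPy sketch of an NT-Xent (normalized temperature-scaled cross-entropy) objective in the style of SimCLR: two augmented views of the same input form a positive pair, and all other samples in the batch serve as negatives. This is an illustrative simplification, not the implementation used by any of the papers listed below.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, D) embeddings of two augmented views of the same N
    inputs; row i of z1 and row i of z2 are a positive pair, and
    every other row in the batch acts as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine similarity
    n = z1.shape[0]
    np.fill_diagonal(sim, -np.inf)                     # exclude self-similarity
    # Positive partner of index i is i+n (and vice versa).
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

The loss is low when the two views of each input embed close together relative to the other batch members, which is the signal that drives representation learning without any labels.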
Papers
Multimodality Multi-Lead ECG Arrhythmia Classification using Self-Supervised Learning
Thinh Phan, Duc Le, Patel Brijesh, Donald Adjeroh, Jingxian Wu, Morten Olgaard Jensen, Ngan Le
Match to Win: Analysing Sequences Lengths for Efficient Self-supervised Learning in Speech and Audio
Yan Gao, Javier Fernandez-Marques, Titouan Parcollet, Pedro P. B. de Gusmao, Nicholas D. Lane
Variance Covariance Regularization Enforces Pairwise Independence in Self-Supervised Representations
Grégoire Mialon, Randall Balestriero, Yann LeCun
Joint Embedding Self-Supervised Learning in the Kernel Regime
Bobak T. Kiani, Randall Balestriero, Yubei Chen, Seth Lloyd, Yann LeCun
Domain-aware Self-supervised Pre-training for Label-Efficient Meme Analysis
Shivam Sharma, Mohd Khizir Siddiqui, Md. Shad Akhtar, Tanmoy Chakraborty
Self-Supervised Learning with an Information Maximization Criterion
Serdar Ozsoy, Shadi Hamdan, Sercan Ö. Arik, Deniz Yuret, Alper T. Erdogan
MetaMask: Revisiting Dimensional Confounder for Self-Supervised Learning
Jiangmeng Li, Wenwen Qiang, Yanan Zhang, Wenyi Mo, Changwen Zheng, Bing Su, Hui Xiong
Self-Supervised Learning of Phenotypic Representations from Cell Images with Weak Labels
Jan Oscar Cross-Zamirski, Guy Williams, Elizabeth Mouchet, Carola-Bibiane Schönlieb, Riku Turkki, Yinhai Wang
Spatial-then-Temporal Self-Supervised Learning for Video Correspondence
Rui Li, Dong Liu