Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models on unlabeled data by formulating pretext tasks that encourage them to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures including transformers and convolutional neural networks. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
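As a concrete illustration of the contrastive-learning approach mentioned above, the sketch below implements a minimal NT-Xent (normalized temperature-scaled cross-entropy) loss of the kind used in SimCLR-style methods, in PyTorch. The function name, the temperature value, and the toy tensors are illustrative assumptions, not code from any of the papers listed here.

```python
# Minimal sketch of an NT-Xent contrastive loss (SimCLR-style).
# Assumption: z1[i] and z2[i] are embeddings of two augmented views of the
# same input; every other embedding in the batch serves as a negative.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    n = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)      # (2n, d)
    sim = z @ z.T / temperature                             # (2n, 2n) similarity matrix
    sim.fill_diagonal_(float('-inf'))                       # exclude self-similarity
    # For row i, the positive is the other view of the same sample.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage with random stand-in embeddings for a batch of 8 samples:
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)
```

In practice z1 and z2 would come from an encoder applied to two random augmentations of the same batch; minimizing this loss pulls the two views of each sample together while pushing apart all other pairs.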
Papers
Exploring Self-Supervised Representation Learning For Low-Resource Medical Image Analysis
Soumitri Chattopadhyay, Soham Ganguly, Sreejit Chaudhury, Sayan Nag, Samiran Chattopadhyay
Single-photon Image Super-resolution via Self-supervised Learning
Yiwei Chen, Chen Jiang, Yu Pan
Towards Democratizing Joint-Embedding Self-Supervised Learning
Florian Bordes, Randall Balestriero, Pascal Vincent
Evolutionary Augmentation Policy Optimization for Self-supervised Learning
Noah Barrett, Zahra Sadeghi, Stan Matwin
Rethinking the Effect of Data Augmentation in Adversarial Contrastive Learning
Rundong Luo, Yifei Wang, Yisen Wang
ArCL: Enhancing Contrastive Learning with Augmentation-Robust Representations
Xuyang Zhao, Tianqi Du, Yisen Wang, Jun Yao, Weiran Huang
Saliency Guided Contrastive Learning on Scene Images
Meilin Chen, Yizhou Wang, Shixiang Tang, Feng Zhu, Haiyang Yang, Lei Bai, Rui Zhao, Donglian Qi, Wanli Ouyang
Novel Class Discovery: an Introduction and Key Concepts
Colin Troisemaine, Vincent Lemaire, Stéphane Gosselin, Alexandre Reiffers-Masson, Joachim Flocon-Cholet, Sandrine Vaton