Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
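To make the contrastive-learning idea mentioned above concrete, here is a minimal NumPy sketch of an NT-Xent (normalized temperature-scaled cross-entropy) objective of the kind used in SimCLR-style SSL: two augmented views of each sample are pulled together while all other samples in the batch act as negatives. The function name, shapes, and temperature value are illustrative, not taken from any of the papers listed below.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings of the same N samples under two augmentations.
    Row i of z1 and row i of z2 form a positive pair; every other row in the
    concatenated batch is treated as a negative.
    """
    n = z1.shape[0]
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    # L2-normalise so dot products are cosine similarities
    z = z / np.linalg.norm(z, axis=1, keepdims=True)
    sim = z @ z.T / temperature                         # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    # positive index for row i is its counterpart in the other view
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # numerically stable log-sum-exp over each row (softmax denominator)
    m = sim.max(axis=1, keepdims=True)
    logsumexp = m.squeeze(1) + np.log(np.exp(sim - m).sum(axis=1))
    # cross-entropy of each row against its positive
    loss = logsumexp - sim[np.arange(2 * n), pos]
    return loss.mean()
```

Intuitively, the loss is low when each embedding is most similar to its own augmented counterpart; closely aligned views therefore yield a smaller loss than unrelated ones, which is the pretext signal the encoder is trained on.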
Papers
MASR: Multi-label Aware Speech Representation
Anjali Raj, Shikhar Bharadwaj, Sriram Ganapathy, Min Ma, Shikhar Vashishth
Language-based Action Concept Spaces Improve Video Self-Supervised Learning
Kanchana Ranasinghe, Michael Ryoo
Revisiting Fine-Tuning Strategies for Self-supervised Medical Imaging Analysis
Muhammad Osama Khan, Yi Fang
Self2Self+: Single-Image Denoising with Self-Supervised Learning and Image Quality Assessment Loss
Jaekyun Ko, Sanghwan Lee
MOCA: Self-supervised Representation Learning by Predicting Masked Online Codebook Assignments
Spyros Gidaris, Andrei Bursuc, Oriane Simeoni, Antonin Vobecky, Nikos Komodakis, Matthieu Cord, Patrick Pérez
Systematic comparison of semi-supervised and self-supervised learning for medical image classification
Zhe Huang, Ruijie Jiang, Shuchin Aeron, Michael C. Hughes
Towards the Sparseness of Projection Head in Self-Supervised Learning
Zeen Song, Xingzhe Su, Jingyao Wang, Wenwen Qiang, Changwen Zheng, Fuchun Sun
L-DAWA: Layer-wise Divergence Aware Weight Aggregation in Federated Self-Supervised Visual Representation Learning
Yasar Abbas Ur Rehman, Yan Gao, Pedro Porto Buarque de Gusmão, Mina Alibeigi, Jiajun Shen, Nicholas D. Lane
FreeCOS: Self-Supervised Learning from Fractals and Unlabeled Images for Curvilinear Object Segmentation
Tianyi Shi, Xiaohuan Ding, Liang Zhang, Xin Yang