Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative modeling within transformer and convolutional architectures. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
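To make the contrastive branch of SSL mentioned above concrete, below is a minimal sketch of an NT-Xent (InfoNCE-style) loss in PyTorch, the objective family used by methods like SimCLR: two augmented views of each input are embedded, each pair is treated as a positive, and all other samples in the batch serve as negatives. The function name `nt_xent_loss` and its parameters are illustrative assumptions, not code from any of the papers listed below.

```python
# Minimal sketch of a contrastive SSL objective (NT-Xent / InfoNCE-style),
# assuming PyTorch. Names here are illustrative, not from a specific paper.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) projections of two augmentations of the same N inputs.
    Each (z1[i], z2[i]) pair is a positive; every other sample in the
    concatenated 2N-sample batch acts as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit norm
    sim = z @ z.t() / temperature                       # cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-pairs
    # Positive for sample i is i + n (and vice versa), so the target index
    # pattern over the concatenated batch is [n..2n-1, 0..n-1].
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage sketch: encode two augmentations with a shared encoder, then
# loss = nt_xent_loss(encoder(view1), encoder(view2))
```

The temperature controls how sharply the softmax concentrates on the hardest negatives; masked-autoencoder approaches replace this objective entirely with a reconstruction loss on masked-out input patches.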
Papers
FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
Yeonghyeon Lee, Kangwook Jang, Jahyun Goo, Youngmoon Jung, Hoirin Kim
Dissecting Self-Supervised Learning Methods for Surgical Computer Vision
Sanat Ramesh, Vinkle Srivastav, Deepak Alapatt, Tong Yu, Aditya Murali, Luca Sestini, Chinedu Innocent Nwoye, Idris Hamoud, Saurav Sharma, Antoine Fleurentin, Georgios Exarchakis, Alexandros Karargyris, Nicolas Padoy
Guillotine Regularization: Why removing layers is needed to improve generalization in Self-Supervised Learning
Florian Bordes, Randall Balestriero, Quentin Garrido, Adrien Bardes, Pascal Vincent
Self-supervised Learning in Remote Sensing: A Review
Yi Wang, Conrad M Albrecht, Nassim Ait Ali Braham, Lichao Mou, Xiao Xiang Zhu
Predicting within and across language phoneme recognition performance of self-supervised learning speech pre-trained models
Hang Ji, Tanvina Patel, Odette Scharenborg
Self Supervised Learning for Few Shot Hyperspectral Image Classification
Nassim Ait Ali Braham, Lichao Mou, Jocelyn Chanussot, Julien Mairal, Xiao Xiang Zhu
Exploring the Effectiveness of Self-supervised Learning and Classifier Chains in Emotion Recognition of Nonverbal Vocalizations
Detai Xin, Shinnosuke Takamichi, Hiroshi Saruwatari
Imitation Learning for Nonprehensile Manipulation through Self-Supervised Learning Considering Motion Speed
Yuki Saigusa, Sho Sakaino, Toshiaki Tsuji
Analysis of Self-Supervised Learning and Dimensionality Reduction Methods in Clustering-Based Active Learning for Speech Emotion Recognition
Einari Vaaras, Manu Airaksinen, Okko Räsänen