Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
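To make the idea of a pretext task concrete, below is a minimal sketch of a SimCLR-style contrastive (NT-Xent) objective, one of the contrastive-learning techniques mentioned above. The batch size, embedding dimension, temperature, and the random tensors standing in for encoder outputs are illustrative assumptions, not taken from any paper listed here.

```python
# Minimal sketch of a contrastive SSL pretext objective (NT-Xent).
# In practice z1 and z2 would be projections of two augmented views
# of the same images produced by an encoder; random tensors are used
# here only to keep the example self-contained.
import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N samples."""
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-norm rows
    sim = z @ z.t() / temperature                       # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                   # exclude self-similarity from the softmax
    # For row i, the positive example is the other view of the same sample: i+N or i-N.
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    torch.manual_seed(0)
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)   # toy stand-ins for encoder outputs
    print(nt_xent_loss(z1, z2).item())
```

A full SSL pipeline would pair a loss like this with data augmentation, an encoder, and a projection head; masked-autoencoder approaches replace the contrastive objective with reconstruction of masked inputs.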
Papers
SCDNet: Self-supervised Learning Feature-based Speaker Change Detection
Yue Li, Xinsheng Wang, Li Zhang, Lei Xie
A deep cut into Split Federated Self-supervised Learning
Marcin Przewięźlikowski, Marcin Osial, Bartosz Zieliński, Marek Śmieja
SimSAM: Simple Siamese Representations Based Semantic Affinity Matrix for Unsupervised Image Segmentation
Chanda Grover Kamra, Indra Deep Mastan, Nitin Kumar, Debayan Gupta
Exploring Self-Supervised Multi-view Contrastive Learning for Speech Emotion Recognition with Limited Annotations
Bulat Khaertdinov, Pedro Jeuris, Annanda Sousa, Enrique Hortal
GenDistiller: Distilling Pre-trained Language Models based on an Autoregressive Generative Model
Yingying Gao, Shilei Zhang, Chao Deng, Junlan Feng
Operational Latent Spaces
Scott H. Hawley, Austin R. Tackett
Learning to Edit Visual Programs with Self-Supervision
R. Kenny Jones, Renhao Zhang, Aditya Ganeshan, Daniel Ritchie
Using Self-supervised Learning Can Improve Model Fairness
Sofia Yfantidou, Dimitris Spathis, Marios Constantinides, Athena Vakali, Daniele Quercia, Fahim Kawsar
Towards Supervised Performance on Speaker Verification with Self-Supervised Learning by Leveraging Large-Scale ASR Models
Victor Miara, Theo Lepage, Reda Dehak
Strengthening Network Intrusion Detection in IoT Environments with Self-Supervised Learning and Few Shot Learning
Safa Ben Atitallah, Maha Driss, Wadii Boulila, Anis Koubaa