Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling, with particular impact in fields like medical imaging, speech processing, and autonomous driving.
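To make the idea of a pretext task concrete, below is a minimal sketch of one common SSL approach, a SimCLR-style contrastive objective, written in PyTorch. The encoder architecture, augmentations, and hyperparameters are illustrative assumptions and are not drawn from any of the papers listed below; it only shows the general pattern of training on unlabeled data by matching two augmented views of the same input.

```python
# Minimal contrastive self-supervised pretext task (SimCLR-style sketch).
# All model and hyperparameter choices here are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F


class SmallEncoder(nn.Module):
    """Toy convolutional encoder followed by a projection head."""

    def __init__(self, feature_dim: int = 64):
        super().__init__()
        self.backbone = nn.Sequential(
            nn.Conv2d(3, 32, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.projector = nn.Sequential(
            nn.Linear(64, 128), nn.ReLU(), nn.Linear(128, feature_dim),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.projector(self.backbone(x))


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss: pulls together two views of the same image and
    pushes apart views of different images in the batch."""
    n = z1.shape[0]
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # cosine similarities
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim = sim.masked_fill(mask, float("-inf"))           # exclude self-pairs
    # The positive for sample i is its other view at index i + N (and vice versa).
    targets = torch.cat([torch.arange(n) + n, torch.arange(n)]).to(z.device)
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    encoder = SmallEncoder()
    optimizer = torch.optim.Adam(encoder.parameters(), lr=1e-3)

    # Unlabeled batch of images; additive noise stands in for the usual
    # crop/color-jitter augmentation pipeline used in practice.
    images = torch.rand(16, 3, 32, 32)
    view1 = images + 0.05 * torch.randn_like(images)
    view2 = images + 0.05 * torch.randn_like(images)

    loss = nt_xent_loss(encoder(view1), encoder(view2))
    loss.backward()
    optimizer.step()
    print(f"pretext loss: {loss.item():.4f}")
```

After pretext training, the projection head is typically discarded and the backbone's representations are reused, via fine-tuning or a linear probe, for the downstream task of interest.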
Papers
SSL-TTS: Leveraging Self-Supervised Embeddings and kNN Retrieval for Zero-Shot Multi-speaker TTS
Karl El Hajal, Ajinkya Kulkarni, Enno Hermann, Mathew Magimai.-Doss
Speech Representation Learning Revisited: The Necessity of Separate Learnable Parameters and Robust Data Augmentation
Hemant Yadav, Sunayana Sitaram, Rajiv Ratn Shah
A Review of Pseudo-Labeling for Computer Vision
Patrick Kage, Jay C. Rothenberger, Pavlos Andreadis, Dimitrios I. Diochnos
Temporal Variability and Multi-Viewed Self-Supervised Representations to Tackle the ASVspoof5 Deepfake Challenge
Yuankun Xie, Xiaopeng Wang, Zhiyong Wang, Ruibo Fu, Zhengqi Wen, Haonan Cheng, Long Ye
BSS-CFFMA: Cross-Domain Feature Fusion and Multi-Attention Speech Enhancement Network based on Self-Supervised Embedding
Alimjan Mattursun, Liejun Wang, Yinfeng Yu