Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities (images, audio, point clouds, fMRI data). SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling; this particularly benefits fields such as medical imaging, speech processing, and autonomous driving.
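To make the idea of a pretext task concrete, here is a minimal sketch of one widely used SSL objective: a contrastive (NT-Xent / InfoNCE-style) loss, in which embeddings of two augmented views of the same sample are pulled together while all other pairs in the batch are pushed apart. This is an illustrative NumPy implementation, not the method of any specific paper listed below; the function name, shapes, and temperature value are assumptions for the sketch.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, d) embeddings of two augmented views of the same N samples.
    Each row in z1 has exactly one "positive" (the matching row in z2);
    every other row in the 2N-sized batch is a "negative".
    """
    z = np.concatenate([z1, z2], axis=0)                  # (2N, d)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)      # unit-normalize -> cosine sim
    sim = (z @ z.T) / temperature                         # (2N, 2N) similarity logits
    np.fill_diagonal(sim, -np.inf)                        # exclude self-similarity
    n = z1.shape[0]
    # The positive for row i is row i+n (and vice versa).
    targets = np.concatenate([np.arange(n, 2 * n), np.arange(0, n)])
    # Row-wise cross-entropy against the positive index.
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), targets] - logsumexp)
    return loss.mean()

# Sanity check: aligned views should score a lower loss than unrelated ones.
rng = np.random.default_rng(0)
z1 = rng.normal(size=(8, 16))
loss_aligned = nt_xent_loss(z1, z1 + 0.01 * rng.normal(size=z1.shape))
loss_random = nt_xent_loss(z1, rng.normal(size=z1.shape))
print(loss_aligned < loss_random)
```

In a real SSL pipeline the two views come from data augmentations (crops, masking, pitch shifts, etc.) and the embeddings from a trainable encoder; minimizing this loss is what drives the encoder toward representations that transfer to downstream tasks.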
Papers
CARL-G: Clustering-Accelerated Representation Learning on Graphs
William Shiao, Uday Singh Saini, Yozen Liu, Tong Zhao, Neil Shah, Evangelos E. Papalexakis
Active Learning Guided Fine-Tuning for enhancing Self-Supervised Based Multi-Label Classification of Remote Sensing Images
Lars Möllenbrok, Begüm Demir
FLSL: Feature-level Self-supervised Learning
Qing Su, Anton Netchaev, Hai Li, Shihao Ji
A Large-Scale Analysis on Self-Supervised Video Representation Learning
Akash Kumar, Ashlesha Kumar, Vibhav Vineet, Yogesh Singh Rawat
Liquidity takers behavior representation through a contrastive learning approach
Ruihua Ruan, Emmanuel Bacry, Jean-François Muzy
Context-Aware Self-Supervised Learning of Whole Slide Images
Milan Aryal, Nasim Yahyasoltani
Self-supervised Audio Teacher-Student Transformer for Both Clip-level and Frame-level Tasks
Xian Li, Nian Shao, Xiaofei Li
NTKCPL: Active Learning on Top of Self-Supervised Model by Estimating True Coverage
Ziting Wen, Oscar Pizarro, Stefan Williams
Simultaneous or Sequential Training? How Speech Representations Cooperate in a Multi-Task Self-Supervised Learning System
Khazar Khorrami, María Andrea Cruz Blandón, Tuomas Virtanen, Okko Räsänen
N-Shot Benchmarking of Whisper on Diverse Arabic Speech Recognition
Bashar Talafha, Abdul Waheed, Muhammad Abdul-Mageed
Understanding Augmentation-based Self-Supervised Representation Learning via RKHS Approximation and Regression
Runtian Zhai, Bingbin Liu, Andrej Risteski, Zico Kolter, Pradeep Ravikumar
Speech Self-Supervised Representation Benchmarking: Are We Doing it Right?
Salah Zaiem, Youcef Kemiche, Titouan Parcollet, Slim Essid, Mirco Ravanelli