Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage the vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling; this is particularly impactful in fields such as medical imaging, speech processing, and autonomous driving.
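To make the notion of a pretext task concrete, below is a minimal sketch of one common SSL recipe: a SimCLR-style contrastive objective trained on two augmented views of the same unlabeled batch. It assumes PyTorch; the toy encoder, the noise "augmentations", and the batch size and temperature values are illustrative placeholders, not the method of any paper listed on this page.

```python
# Minimal sketch of a contrastive SSL pretext task (SimCLR-style NT-Xent loss).
# Encoder, augmentations, and hyperparameters are illustrative placeholders.
import torch
import torch.nn as nn
import torch.nn.functional as F

class SmallEncoder(nn.Module):
    """Toy encoder plus projection head producing embeddings for the pretext task."""
    def __init__(self, in_dim=784, emb_dim=128):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(in_dim, 512), nn.ReLU(),
            nn.Linear(512, emb_dim),
        )

    def forward(self, x):
        return F.normalize(self.net(x), dim=-1)  # unit-norm embeddings

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive loss: each view's positive is the other view of the same sample."""
    z = torch.cat([z1, z2], dim=0)                 # (2N, D)
    sim = z @ z.t() / temperature                  # cosine similarities (unit-norm inputs)
    n = z1.size(0)
    mask = torch.eye(2 * n, dtype=torch.bool, device=z.device)
    sim.masked_fill_(mask, float('-inf'))          # exclude self-similarity
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)]).to(z.device)
    return F.cross_entropy(sim, targets)

# Unlabeled batch: two "augmented" views of the same samples (Gaussian noise
# stands in for real image/audio augmentations in this toy example).
x = torch.randn(32, 784)
view1 = x + 0.1 * torch.randn_like(x)
view2 = x + 0.1 * torch.randn_like(x)

encoder = SmallEncoder()
loss = nt_xent_loss(encoder(view1), encoder(view2))
loss.backward()  # gradients train the encoder without any labels
```

In practice, the encoder pretrained this way is then reused (frozen or fine-tuned) on a labeled downstream task, which is where the benefit of the pretext task shows up.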
Papers
Non-Contrastive Learning Meets Language-Image Pre-Training
Jinghao Zhou, Li Dong, Zhe Gan, Lijuan Wang, Furu Wei
Self-Supervised Learning Through Efference Copies
Franz Scherr, Qinghai Guo, Timoleon Moraitis
Histopathological Image Classification based on Self-Supervised Vision Transformer and Weak Labels
Ahmet Gokberk Gul, Oezdemir Cetin, Christoph Reich, Tim Prangemeier, Nadine Flinner, Heinz Koeppl
Improving generalizability of distilled self-supervised speech processing models under distorted settings
Kuan-Po Huang, Yu-Kuan Fu, Tsu-Yuan Hsu, Fabian Ritter Gutierrez, Fan-Lin Wang, Liang-Hsuan Tseng, Yu Zhang, Hung-yi Lee
Self-Supervised 2D/3D Registration for X-Ray to CT Image Fusion
Srikrishna Jaganathan, Maximilian Kukla, Jian Wang, Karthik Shetty, Andreas Maier
Exploration of A Self-Supervised Speech Model: A Study on Emotional Corpora
Yuanchao Li, Yumnah Mohamied, Peter Bell, Catherine Lai
RankMe: Assessing the downstream performance of pretrained self-supervised representations by their rank
Quentin Garrido, Randall Balestriero, Laurent Najman, Yann LeCun
Automated Graph Self-supervised Learning via Multi-teacher Knowledge Distillation
Lirong Wu, Yufei Huang, Haitao Lin, Zicheng Liu, Tianyu Fan, Stan Z. Li
Multi-task Self-supervised Graph Neural Networks Enable Stronger Task Generalization
Mingxuan Ju, Tong Zhao, Qianlong Wen, Wenhao Yu, Neil Shah, Yanfang Ye, Chuxu Zhang