Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by designing pretext tasks that encourage the model to learn useful representations. Current research focuses on improving generalization, mitigating overfitting, and developing efficient architectures such as transformers and CNNs across modalities including images, audio, point clouds, and fMRI data. SSL's significance lies in its ability to leverage vast amounts of readily available unlabeled data, improving performance on downstream tasks and reducing reliance on expensive, time-consuming manual labeling, with particular impact in fields such as medical imaging, speech processing, and autonomous driving.
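One common family of pretext tasks is contrastive learning, where two augmented views of the same input are pulled together in embedding space while views of different inputs are pushed apart. A minimal NumPy sketch of an InfoNCE-style contrastive loss (the function name and shapes here are illustrative, not from any specific paper above):

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss for paired views.

    z1, z2: (n, d) embeddings of two augmented views of the same
    n inputs; row i of z1 and row i of z2 form a positive pair,
    all other rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = (z1 @ z2.T) / temperature  # (n, n) similarity matrix
    # Cross-entropy with the diagonal (matching pairs) as targets.
    log_softmax = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return float(-np.mean(np.diag(log_softmax)))
```

The loss is small when each embedding is closest to its own positive pair and larger when positives are mismatched, which is what drives the encoder to produce augmentation-invariant, input-discriminative representations.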
Papers
Diversified Ensemble of Independent Sub-Networks for Robust Self-Supervised Representation Learning
Amirhossein Vahidi, Lisa Wimmer, Hüseyin Anil Gündüz, Bernd Bischl, Eyke Hüllermeier, Mina Rezaei
MS-Net: A Multi-modal Self-supervised Network for Fine-Grained Classification of Aircraft in SAR Images
Bingying Yue, Jianhao Li, Hao Shi, Yupei Wang, Honghu Zhong
Speech Self-Supervised Representations Benchmarking: a Case for Larger Probing Heads
Salah Zaiem, Youcef Kemiche, Titouan Parcollet, Slim Essid, Mirco Ravanelli
Self-Supervision for Tackling Unsupervised Anomaly Detection: Pitfalls and Opportunities
Leman Akoglu, Jaemin Yoo
MOFO: MOtion FOcused Self-Supervision for Video Understanding
Mona Ahmadian, Frank Guerin, Andrew Gilbert
Self-Supervised Learning for Endoscopic Video Analysis
Roy Hirsch, Mathilde Caron, Regev Cohen, Amir Livne, Ron Shapiro, Tomer Golany, Roman Goldenberg, Daniel Freedman, Ehud Rivlin
Self-Supervised Knowledge-Driven Deep Learning for 3D Magnetic Inversion
Yinshuo Li, Zhuo Jia, Wenkai Lu, Cao Song
Multi-Task Hypergraphs for Semi-supervised Learning using Earth Observations
Mihai Pirvu, Alina Marcu, Alexandra Dobrescu, Nabil Belbachir, Marius Leordeanu
LightDepth: Single-View Depth Self-Supervision from Illumination Decline
Javier Rodríguez-Puigvert, Víctor M. Batlle, J. M. M. Montiel, Ruben Martinez-Cantin, Pascal Fua, Juan D. Tardós, Javier Civera
Information Theory-Guided Heuristic Progressive Multi-View Coding
Jiangmeng Li, Hang Gao, Wenwen Qiang, Changwen Zheng
Implicit Self-supervised Language Representation for Spoken Language Diarization
Jagabandhu Mishra, S. R. Mahadeva Prasanna
GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction
Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
Point Contrastive Prediction with Semantic Clustering for Self-Supervised Learning on Point Cloud Videos
Xiaoxiao Sheng, Zhiqiang Shen, Gang Xiao, Longguang Wang, Yulan Guo, Hehe Fan