Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
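To make the contrastive-learning technique mentioned above concrete, here is a minimal sketch of an NT-Xent (normalized temperature-scaled cross-entropy) loss of the kind used by SimCLR-style SSL methods. It assumes PyTorch; the function name and the random stand-in embeddings are illustrative only and are not drawn from any paper listed below.

```python
# Illustrative sketch of an NT-Xent contrastive loss (SimCLR-style).
# Not code from any listed paper; names and shapes are assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """z1, z2: (N, D) embeddings of two augmented views of the same N inputs."""
    n = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)    # (2N, D)
    sim = z @ z.t() / temperature                          # (2N, 2N)
    # Mask self-similarity so each row's only positive is its paired view.
    sim.fill_diagonal_(float("-inf"))
    # Row i's positive sits at index (i + n) mod 2n.
    targets = (torch.arange(2 * n) + n) % (2 * n)
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    torch.manual_seed(0)
    # Random tensors stand in for encoder outputs of two augmented views.
    view1, view2 = torch.randn(8, 128), torch.randn(8, 128)
    print(nt_xent_loss(view1, view2).item())
```

The temperature hyperparameter controls how sharply the softmax concentrates on the hardest negatives; lower values emphasize them more strongly.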
Papers
EarthView: A Large Scale Remote Sensing Dataset for Self-Supervision
Diego Velazquez, Pau Rodriguez López, Sergio Alonso, Josep M. Gonfaus, Jordi Gonzalez, Gerardo Richarte, Javier Marin, Yoshua Bengio, Alexandre Lacoste
Optimizing Speech Multi-View Feature Fusion through Conditional Computation
Weiqiao Shan, Yuhao Zhang, Yuchen Han, Bei Li, Xiaofeng Zhao, Yuang Li, Min Zhang, Hao Yang, Tong Xiao, Jingbo Zhu
Towards a Generalizable Speech Marker for Parkinson's Disease Diagnosis
Maksim Siniukov, Ellie Xing, Sanaz Attaripour Isfahani, Mohammad Soleymani
An Empirical Study of Accuracy-Robustness Tradeoff and Training Efficiency in Self-Supervised Learning
Fatemeh Ghofrani, Pooyan Jamshidi
Radar Signal Recognition through Self-Supervised Learning and Domain Adaptation
Zi Huang, Akila Pemasiri, Simon Denman, Clinton Fookes, Terrence Martin
PyG-SSL: A Graph Self-Supervised Learning Toolkit
Lecheng Zheng, Baoyu Jing, Zihao Li, Zhichen Zeng, Tianxin Wei, Mengting Ai, Xinrui He, Lihui Liu, Dongqi Fu, Jiaxuan You, Hanghang Tong, Jingrui He
Metadata-Enhanced Speech Emotion Recognition: Augmented Residual Integration and Co-Attention in Two-Stage Fine-Tuning
Zixiang Wan, Ziyue Qiu, Yiyang Liu, Wei-Qiang Zhang
Where Did Your Model Learn That? Label-free Influence for Self-supervised Learning
Nidhin Harilal, Amit Kiran Rege, Reza Akbarian Bafghi, Maziar Raissi, Claire Monteleoni
An OpenMind for 3D medical vision self-supervised learning
Tassilo Wald, Constantin Ulrich, Jonathan Suprijadi, Michal Nohel, Robin Peretzke, Klaus H. Maier-Hein
BloomCoreset: Fast Coreset Sampling using Bloom Filters for Fine-Grained Self-Supervised Learning
Prajwal Singh, Gautam Vashishtha, Indra Deep Mastan, Shanmuganathan Raman
AV-DTEC: Self-Supervised Audio-Visual Fusion for Drone Trajectory Estimation and Classification
Zhenyuan Xiao, Yizhuo Yang, Guili Xu, Xianglong Zeng, Shenghai Yuan