Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
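To make the idea of a pretext task concrete, below is a minimal sketch of a SimCLR-style contrastive (NT-Xent) objective: two augmented views of the same batch are pulled together while all other pairs act as negatives. The encoder is left abstract, and the function names, batch size, and temperature are illustrative assumptions rather than the setup of any paper listed here.

import torch
import torch.nn.functional as F


def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two augmented views of the same batch.

    z1, z2: (N, D) projections of the two views; row i of z1 and row i of z2
    form the positive pair, and every other row serves as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D), unit-norm embeddings
    sim = z @ z.t() / temperature                         # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                     # exclude each sample's self-similarity
    # The positive for sample i in view 1 is sample i in view 2, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)


if __name__ == "__main__":
    # Toy usage: random tensors stand in for encoder outputs on two
    # augmentations of the same 8-example batch (hypothetical values).
    torch.manual_seed(0)
    view1 = torch.randn(8, 128)
    view2 = view1 + 0.1 * torch.randn(8, 128)  # weakly perturbed second view
    print(nt_xent_loss(view1, view2).item())

In practice the projections would come from an encoder (e.g., a CNN or transformer backbone followed by a small projection head) applied to two random augmentations of each unlabeled example; masked-autoencoder and generative pretext tasks replace this loss with a reconstruction objective.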
Papers
OPERA: Omni-Supervised Representation Learning with Hierarchical Supervisions
Chengkun Wang, Wenzhao Zheng, Zheng Zhu, Jie Zhou, Jiwen Lu
CSS: Combining Self-training and Self-supervised Learning for Few-shot Dialogue State Tracking
Haoning Zhang, Junwei Bao, Haipeng Sun, Huaishao Luo, Wenye Li, Shuguang Cui
Label-free segmentation from cardiac ultrasound using self-supervised learning
Danielle L. Ferreira, Zaynaf Salaymang, Rima Arnaout
Knowledge Prompts: Injecting World Knowledge into Language Models through Soft Prompts
Cicero Nogueira dos Santos, Zhe Dong, Daniel Cer, John Nham, Siamak Shakeri, Jianmo Ni, Yun-hsuan Sung
Exploiting map information for self-supervised learning in motion forecasting
Caio Azevedo, Thomas Gilles, Stefano Sabatini, Dzmitry Tsishkou
Exploring Efficient-tuning Methods in Self-supervised Speech Models
Zih-Ching Chen, Chin-Lun Fu, Chih-Ying Liu, Shang-Wen Li, Hung-yi Lee
Self-supervised Learning for Label-Efficient Sleep Stage Classification: A Comprehensive Evaluation
Emadeldeen Eldele, Mohamed Ragab, Zhenghua Chen, Min Wu, Chee-Keong Kwoh, Xiaoli Li
Brief Introduction to Contrastive Learning Pretext Tasks for Visual Representation
Zhenyuan Lu
SimPer: Simple Self-Supervised Learning of Periodic Targets
Yuzhe Yang, Xin Liu, Jiang Wu, Silviu Borac, Dina Katabi, Ming-Zher Poh, Daniel McDuff
Effective Self-supervised Pre-training on Low-compute Networks without Distillation
Fuwen Tan, Fatemeh Saleh, Brais Martinez
CCC-wav2vec 2.0: Clustering aided Cross Contrastive Self-supervised learning of speech representations
Vasista Sai Lodagala, Sreyan Ghosh, S. Umesh
Fitting a Directional Microstructure Model to Diffusion-Relaxation MRI Data with Self-Supervised Machine Learning
Jason P. Lim, Stefano B. Blumberg, Neil Narayan, Sean C. Epstein, Daniel C. Alexander, Marco Palombo, Paddy J. Slator
Automated Graph Self-supervised Learning via Multi-teacher Knowledge Distillation
Lirong Wu, Yufei Huang, Haitao Lin, Zicheng Liu, Tianyu Fan, Stan Z. Li
Exploring The Role of Mean Teachers in Self-supervised Masked Auto-Encoders
Youngwan Lee, Jeffrey Willette, Jonghee Kim, Juho Lee, Sung Ju Hwang