Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
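To make the contrastive-learning idea above concrete, here is a minimal NumPy sketch of an NT-Xent (normalized temperature-scaled cross-entropy) objective of the kind used by SimCLR-style methods: two augmented views of the same inputs are embedded, and each embedding is pulled toward its counterpart view and pushed away from all other embeddings in the batch. The function name, shapes, and temperature value are illustrative, not taken from any specific paper listed below.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Illustrative NT-Xent contrastive loss.
    z1, z2: (N, D) embeddings of two augmented views of the same N inputs."""
    # L2-normalize so dot products are cosine similarities
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    z = np.concatenate([z1, z2], axis=0)        # (2N, D) stacked views
    sim = z @ z.T / temperature                 # pairwise similarity matrix
    np.fill_diagonal(sim, -np.inf)              # exclude each sample's self-pair
    n = z1.shape[0]
    # the positive for row i is row i+n, and vice versa
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    loss = -(sim[np.arange(2 * n), pos] - logsumexp)
    return loss.mean()
```

The loss is low when the two views of each input agree (high positive similarity) and other batch items are dissimilar, which is what drives the encoder to learn augmentation-invariant representations without labels.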
Papers
Scalable Graph Self-Supervised Learning
Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Raika Karimi, Ali Ghodsi
WERank: Towards Rank Degradation Prevention for Self-Supervised Learning Using Weight Regularization
Ali Saheb Pasand, Reza Moravej, Mahdi Biparva, Ali Ghodsi
GraSSRep: Graph-Based Self-Supervised Learning for Repeat Detection in Metagenomic Assembly
Ali Azizpour, Advait Balaji, Todd J. Treangen, Santiago Segarra
SLYKLatent: A Learning Framework for Gaze Estimation Using Deep Facial Feature Learning
Samuel Adebayo, Joost C. Dessing, Seán McLoone
A Probabilistic Model Behind Self-Supervised Learning
Alice Bizeul, Bernhard Schölkopf, Carl Allen
Guiding Masked Representation Learning to Capture Spatio-Temporal Relationship of Electrocardiogram
Yeongyeon Na, Minje Park, Yunwon Tae, Sunghoon Joo
On the Transferability of Large-Scale Self-Supervision to Few-Shot Audio Classification
Calum Heggan, Sam Budgett, Timothy Hospedales, Mehrdad Yaghoobi
A Survey on Self-Supervised Learning for Non-Sequential Tabular Data
Wei-Yao Wang, Wei-Wei Du, Derek Xu, Wei Wang, Wen-Chih Peng
Enhanced Urban Region Profiling with Adversarial Self-Supervised Learning
Weiliang Chan, Qianqian Ren, Jinbao Li
Self-Supervised Contrastive Pre-Training for Multivariate Point Processes
Xiao Shou, Dharmashankar Subramanian, Debarun Bhattacharjya, Tian Gao, Kristin P. Bennett
Self-supervised learning of video representations from a child's perspective
A. Emin Orhan, Wentao Wang, Alex N. Wang, Mengye Ren, Brenden M. Lake
MLEM: Generative and Contrastive Learning as Distinct Modalities for Event Sequences
Viktor Moskvoretskii, Dmitry Osin, Egor Shvetsov, Igor Udovichenko, Maxim Zhelnin, Andrey Dukhovny, Anna Zhimerikina, Evgeny Burnaev
Hybrid Transformer and Spatial-Temporal Self-Supervised Learning for Long-term Traffic Prediction
Wang Zhu, Doudou Zhang, Baichao Long, Jianli Xiao