Self-Supervised Learning
Self-supervised learning (SSL) trains machine learning models on unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques such as contrastive learning, masked autoencoders, and generative models within architectures including transformers and convolutional neural networks. These advances matter because they reduce reliance on expensive, time-consuming data labeling, enabling robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. Efficiency is also a key focus, with research exploring the optimal model and data sizes for a given computational budget.
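To make the contrastive-learning idea concrete, below is a minimal sketch of a SimCLR-style NT-Xent loss in PyTorch: two augmented views of the same batch serve as positive pairs, and every other embedding in the batch acts as a negative. The function name, batch size, and temperature are illustrative assumptions, not drawn from any of the papers listed here.

# Minimal sketch of a SimCLR-style NT-Xent contrastive loss.
# Names and hyperparameters are illustrative assumptions.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two augmented views z1, z2, each of shape (N, D)."""
    n = z1.size(0)
    # Stack both views and project onto the unit sphere: (2N, D).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    # Pairwise cosine similarities, scaled by temperature: (2N, 2N).
    sim = z @ z.t() / temperature
    # Exclude self-similarity so each row's softmax ignores its own embedding.
    sim.fill_diagonal_(float("-inf"))
    # The positive for row i is row i+N (and vice versa); all others are negatives.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Usage: embeddings of two augmentations of the same image batch.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
loss = nt_xent_loss(z1, z2)

Masked-autoencoder pretext tasks follow the same self-supervision pattern with a different objective: corrupt the input (e.g., mask image patches), then train the model to reconstruct the missing content, computing the loss only over the masked positions.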
Papers
Contextually Affinitive Neighborhood Refinery for Deep Clustering
Chunlin Yu, Ye Shi, Jingya Wang
NearbyPatchCL: Leveraging Nearby Patches for Self-Supervised Patch-Level Multi-Class Classification in Whole-Slide Images
Gia-Bao Le, Van-Tien Nguyen, Trung-Nghia Le, Minh-Triet Tran
Learned representation-guided diffusion models for large-image generation
Alexandros Graikos, Srikar Yellapragada, Minh-Quan Le, Saarthak Kapse, Prateek Prasanna, Joel Saltz, Dimitris Samaras
Multimodal Pretraining of Medical Time Series and Notes
Ryan King, Tianbao Yang, Bobak Mortazavi
Self-supervised Machine Learning Based Approach to Orbit Modelling Applied to Space Traffic Management
Emma Stevenson, Victor Rodriguez-Fernandez, Hodei Urrutxua, Vincent Morand, David Camacho
Medical Vision Language Pretraining: A survey
Prashant Shrestha, Sanskar Amgain, Bidur Khanal, Cristian A. Linte, Binod Bhattarai
Disentangling the Effects of Data Augmentation and Format Transform in Self-Supervised Learning of Image Representations
Neha Kalibhat, Warren Morningstar, Alex Bijamov, Luyang Liu, Karan Singhal, Philip Mansfield
SASSL: Enhancing Self-Supervised Learning via Neural Style Transfer
Renan A. Rojas-Gomez, Karan Singhal, Ali Etemad, Alex Bijamov, Warren R. Morningstar, Philip Andrew Mansfield
Beyond Accuracy: Statistical Measures and Benchmark for Evaluation of Representation from Self-Supervised Learning
Jiantao Wu, Shentong Mo, Sara Atito, Josef Kittler, Zhenhua Feng, Muhammad Awais
Local Masking Meets Progressive Freezing: Crafting Efficient Vision Transformers for Self-Supervised Learning
Utku Mert Topcuoglu, Erdem Akagündüz