Self-Supervised Learning
Self-supervised learning (SSL) aims to train machine learning models using unlabeled data by formulating pretext tasks that encourage the model to learn useful representations. Current research focuses on improving SSL's performance and generalization across diverse data types (images, audio, graphs, point clouds) and downstream tasks, employing techniques like contrastive learning, masked autoencoders, and generative models within various architectures such as transformers and convolutional neural networks. These advancements are significant because they reduce the reliance on expensive and time-consuming data labeling, enabling the development of robust models for applications ranging from medical image analysis and speech recognition to geospatial AI and protein function prediction. The efficiency gains from SSL are also a key focus, with research exploring optimal model and data sizes for given computational budgets.
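To make the contrastive-learning idea mentioned above concrete, the sketch below implements a minimal NT-Xent (normalized temperature-scaled cross-entropy) objective of the kind popularized by SimCLR-style methods. It is an illustrative NumPy implementation under simplifying assumptions (a single batch of paired embeddings, no projection head or augmentation pipeline); the function name `nt_xent_loss` and the toy shapes are choices made here, not taken from any of the listed papers.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over a batch of paired embeddings.

    z1, z2: (N, D) arrays where row i of z1 and row i of z2 are
    embeddings of two augmented views of the same example (a
    positive pair); every other row in the batch is a negative.
    """
    z = np.concatenate([z1, z2], axis=0)                # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)    # L2-normalize rows
    sim = z @ z.T / temperature                         # scaled cosine similarity
    np.fill_diagonal(sim, -np.inf)                      # exclude self-similarity
    n = len(z1)
    # each row's positive partner: i pairs with i + n (and vice versa)
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # cross-entropy of the positive against all other pairs in the batch
    logsumexp = np.log(np.exp(sim).sum(axis=1))
    return (logsumexp - sim[np.arange(2 * n), pos]).mean()

# Toy usage: identical views should score a lower loss than unrelated ones.
rng = np.random.default_rng(0)
a = rng.normal(size=(8, 32))
loss_aligned = nt_xent_loss(a, a)                   # views agree perfectly
loss_random = nt_xent_loss(a, rng.normal(size=(8, 32)))  # views unrelated
```

The temperature hyperparameter controls how sharply the softmax concentrates on the hardest negatives; 0.5 is used here only as a common default, and real systems tune it per dataset.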
Papers
Kick Back & Relax++: Scaling Beyond Ground-Truth Depth with SlowTV & CribsTV
Jaime Spencer, Chris Russell, Simon Hadfield, Richard Bowden
Self-Supervised Representation Learning with Meta Comprehensive Regularization
Huijie Guo, Ying Ba, Jie Hu, Lingyu Si, Wenwen Qiang, Lei Shi
Applying Self-supervised Learning to Network Intrusion Detection for Network Flows with Graph Neural Network
Renjie Xu, Guangwei Wu, Weiping Wang, Xing Gao, An He, Zhengpeng Zhang
The Common Stability Mechanism behind most Self-Supervised Learning Approaches
Abhishek Jha, Matthew B. Blaschko, Yuki M. Asano, Tinne Tuytelaars
Self-Guided Masked Autoencoders for Domain-Agnostic Self-Supervised Learning
Johnathan Xie, Yoonho Lee, Annie S. Chen, Chelsea Finn
Zero-Shot Pediatric Tuberculosis Detection in Chest X-Rays using Self-Supervised Learning
Daniel Capellán-Martín, Abhijeet Parida, Juan J. Gómez-Valverde, Ramon Sanchez-Jacob, Pooneh Roshanitabrizi, Marius G. Linguraru, María J. Ledesma-Carbayo, Syed M. Anwar
Multi-organ Self-supervised Contrastive Learning for Breast Lesion Segmentation
Hugo Figueiras, Helena Aidos, Nuno Cruz Garcia
Contextual Molecule Representation Learning from Chemical Reaction Knowledge
Han Tang, Shikun Feng, Bicheng Lin, Yuyan Ni, Jingjing Liu, Wei-Ying Ma, Yanyan Lan
Unsupervised learning based object detection using Contrastive Learning
Chandan Kumar, Jansel Herrera-Gerena, John Just, Matthew Darr, Ali Jannesari