Self-Supervision

Self-supervised learning (SSL) trains machine learning models on unlabeled data by using inherent structure or relationships within the data as supervisory signals, reducing reliance on expensive, time-consuming manual annotation. Current research develops SSL methods for diverse data types, including tabular data, images, videos, and point clouds, often building on architectures such as Joint Embedding Predictive Architectures (JEPAs) and Vision Transformers (ViTs) and, where the data modality allows, on techniques such as contrastive learning and data augmentation. Widespread adoption of SSL promises to improve the efficiency and scalability of machine learning across fields ranging from medical image analysis and robotics to natural language processing and remote sensing.
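As a minimal illustration of the contrastive objective mentioned above, the sketch below implements an NT-Xent-style InfoNCE loss in PyTorch over two augmented views of the same batch; the function name, temperature value, and toy usage at the bottom are illustrative assumptions, not taken from any specific paper in this collection.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """NT-Xent / InfoNCE loss over two augmented views of a batch.

    z1, z2: (batch, dim) embeddings of two augmentations of the same inputs.
    Positive pairs are (z1[i], z2[i]); every other sample in the combined
    batch serves as a negative.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    z = torch.cat([z1, z2], dim=0)          # (2B, dim)
    sim = z @ z.T / temperature             # (2B, 2B) scaled cosine similarities
    # Exclude self-similarity so a sample is never its own positive/negative.
    sim.fill_diagonal_(float("-inf"))
    batch = z1.shape[0]
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(batch, 2 * batch),
                         torch.arange(0, batch)])
    return F.cross_entropy(sim, targets)

if __name__ == "__main__":
    # Toy usage: in practice z1, z2 would come from an encoder applied to
    # two random augmentations of the same inputs.
    torch.manual_seed(0)
    z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(z1, z2).item())
```

The loss is framed as a classification problem: each embedding must identify its augmented counterpart among all other embeddings in the batch, which is what drives the encoder to learn augmentation-invariant representations without labels.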

Papers