Self Supervision
Self-supervised learning (SSL) aims to train machine learning models on unlabeled data by using inherent structure or relationships within the data as supervisory signals, reducing reliance on expensive, time-consuming manual annotation. Current research focuses on developing novel SSL methods for diverse data types, including tabular data, images, videos, and point clouds, often employing architectures such as Joint Embedding Predictive Architectures (JEPAs) and Vision Transformers (ViTs) and incorporating techniques such as contrastive learning and, where the data modality permits, data augmentation. The widespread adoption of SSL promises to significantly improve the efficiency and scalability of machine learning across fields ranging from medical image analysis and robotics to natural language processing and remote sensing.
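As a concrete illustration of the contrastive objective mentioned above, the sketch below implements a minimal NT-Xent (normalized temperature-scaled cross-entropy) loss over two augmented views of the same batch. It is a generic sketch of the technique, not the method of any paper listed here; the encoder, augmentation function, and temperature value are placeholders.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """Contrastive NT-Xent loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings of the two views. Row i of z1 and row i of z2
    come from the same underlying sample (a positive pair); every other row
    in the combined batch serves as a negative.
    """
    n = z1.size(0)
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)  # (2N, D), unit-normalized
    sim = z @ z.t() / temperature                        # (2N, 2N) scaled cosine similarities
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # For row i in [0, N) the positive sits at row i + N, and vice versa.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Hypothetical usage (encoder and augment are placeholders):
# x1, x2 = augment(batch), augment(batch)
# loss = nt_xent_loss(encoder(x1), encoder(x2))
```

In practice the two views come from stochastic augmentations of the same input (crops, noise, masking, etc., depending on the modality), and the loss pulls their embeddings together while pushing apart embeddings of different samples in the batch.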
Papers
Detect, Augment, Compose, and Adapt: Four Steps for Unsupervised Domain Adaptation in Object Detection
Mohamed L. Mekhalfi, Davide Boscaini, Fabio Poiesi
Towards quantitative precision for ECG analysis: Leveraging state space models, self-supervision and patient metadata
Temesgen Mehari, Nils Strodthoff
GEO-Bench: Toward Foundation Models for Earth Monitoring
Alexandre Lacoste, Nils Lehmann, Pau Rodriguez, Evan David Sherwin, Hannah Kerner, Björn Lütjens, Jeremy Andrew Irvin, David Dao, Hamed Alemohammad, Alexandre Drouin, Mehmet Gunturkun, Gabriel Huang, David Vazquez, Dava Newman, Yoshua Bengio, Stefano Ermon, Xiao Xiang Zhu
Bridging the Gap Between Multi-Step and One-Shot Trajectory Prediction via Self-Supervision
Faris Janjoš, Max Keller, Maxim Dolgov, J. Marius Zöllner