Self-Supervised Objectives
Self-supervised learning trains models on unlabeled data by defining objectives that encourage the model to learn useful representations without explicit human annotations. Current research focuses on developing novel self-supervised objectives tailored to specific tasks and data modalities, often leveraging contrastive learning, reconstruction, or clustering techniques within architectures such as transformers and graph neural networks. These advances matter because they enable training powerful models on massive datasets where labeled data is scarce or expensive, improving performance in diverse applications such as recommendation systems, speech decoding, and biomedical image analysis.
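To make the contrastive family of objectives mentioned above concrete, here is a minimal numpy sketch of an InfoNCE-style loss over a batch of paired embeddings: each example's two "views" form a positive pair, and all other examples in the batch serve as in-batch negatives. The function name, shapes, and temperature value are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Contrastive (InfoNCE-style) loss: row i of z1 should match row i
    of z2 (the positive pair) and repel every other row (negatives).
    Note: function name and defaults are illustrative, not canonical."""
    # L2-normalize so the dot products below are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature              # (N, N) similarity matrix
    # Softmax cross-entropy with the diagonal entries as the targets.
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
# Identical views (perfectly aligned positives) should score lower
# than mismatched views, where the positives are shuffled away.
aligned = info_nce_loss(z, z)
shuffled = info_nce_loss(z, z[::-1])
print(aligned < shuffled)
```

In practice such a loss is computed on the outputs of an encoder (and often a small projection head) over two augmented views of the same input, and the encoder is trained by backpropagating through the similarity matrix; this sketch shows only the objective itself on fixed embeddings.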
Papers
Self-Supervised Multi-Object Tracking For Autonomous Driving From Consistency Across Timescales
Christopher Lang, Alexander Braun, Lars Schillingmann, Abhinav Valada
Objectives Matter: Understanding the Impact of Self-Supervised Objectives on Vision Transformer Representations
Shashank Shekhar, Florian Bordes, Pascal Vincent, Ari Morcos