Unsupervised Representation Learning
Unsupervised representation learning aims to extract meaningful features from unlabeled data, reducing the need for extensive human annotation. Current research focuses on robust algorithms such as contrastive learning and diffusion models, often incorporating self-attention, transformer architectures, and generative modeling to improve representation quality and to address failure modes such as feature collapse, where a model maps all inputs to nearly identical representations. By enabling learning from large unlabeled datasets, this field is crucial for advancing applications including autonomous driving, medical image analysis, and time series forecasting. The development of more effective and interpretable unsupervised methods holds significant potential to accelerate progress across numerous scientific and practical fields.
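To make the contrastive approach concrete, here is a minimal sketch of a SimCLR-style NT-Xent loss: two augmented views of each input are pulled together while all other samples in the batch act as negatives, which is one common way such methods discourage feature collapse. The function name, shapes, and temperature value are illustrative assumptions, not code from any of the papers listed below.

```python
# Minimal sketch of an NT-Xent contrastive loss (SimCLR-style).
# Assumes z1 and z2 are embeddings of two augmented views of the
# same batch; all names here are hypothetical, for illustration only.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """Contrastive loss over two views z1, z2 of shape (N, D).

    Each pair (z1[i], z2[i]) is a positive; the other 2N - 2 samples
    in the batch serve as negatives.
    """
    n = z1.size(0)
    # L2-normalize and stack both views: (2N, D).
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)
    # Pairwise cosine similarities, scaled by temperature.
    sim = z @ z.t() / temperature
    # Mask self-similarity so an anchor cannot match itself.
    sim.fill_diagonal_(float("-inf"))
    # The positive for index i is at i + N (and vice versa).
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random "embeddings" of two views of a batch of 8 samples.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

Treating the similarity matrix as logits for a cross-entropy over positive indices is a compact, numerically stable way to express the InfoNCE objective underlying many contrastive methods.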
Papers
A Causal Ordering Prior for Unsupervised Representation Learning
Avinash Kori, Pedro Sanchez, Konstantinos Vilouras, Ben Glocker, Sotirios A. Tsaftaris
Self-Supervised Learning with Lie Symmetries for Partial Differential Equations
Grégoire Mialon, Quentin Garrido, Hannah Lawrence, Danyal Rehman, Yann LeCun, Bobak T. Kiani