Representation Learning
Representation learning aims to produce meaningful, efficient representations of data that capture its underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on making representations robust and generalizable, often using contrastive learning, transformers, and mixture-of-experts models, while tackling challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning; a minimal sketch of a contrastive objective follows this paragraph. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
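As an illustration of the contrastive approach mentioned above, here is a minimal sketch of an InfoNCE-style loss in PyTorch; the function name, temperature value, and tensor shapes are illustrative assumptions, not taken from any of the listed papers.

import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    # z1, z2: (batch, dim) embeddings of two views of the same inputs.
    # Matched rows (z1[i], z2[i]) are positives; all other pairs are negatives.
    z1 = F.normalize(z1, dim=1)  # project embeddings onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature      # (batch, batch) cosine similarities
    targets = torch.arange(z1.size(0))    # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with placeholder embeddings standing in for any encoder's output:
z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
print(info_nce_loss(z1, z2).item())

Minimizing this loss pulls the two views of each input together in embedding space while pushing apart views of different inputs, which is the core mechanism behind contrastive representation learning.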
Papers
InfoGCN++: Learning Representation by Predicting the Future for Online Human Skeleton-based Action Recognition
Seunggeun Chi, Hyung-gun Chi, Qixing Huang, Karthik Ramani
Generalizing Medical Image Representations via Quaternion Wavelet Networks
Luigi Sigillo, Eleonora Grassucci, Aurelio Uncini, Danilo Comminiello
Transcending the Attention Paradigm: Representation Learning from Geospatial Social Media Data
Nick DiSanto, Anthony Corso, Benjamin Sanders, Gavin Harding
Provable Compositional Generalization for Object-Centric Learning
Thaddäus Wiedemer, Jack Brady, Alexander Panfilov, Attila Juhos, Matthias Bethge, Wieland Brendel