Representation Learning
Representation learning aims to produce meaningful and efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on learning robust, generalizable representations, often using techniques such as contrastive learning, transformers, and mixture-of-experts models, while addressing challenges including disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
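To make the contrastive-learning idea mentioned above concrete, here is a minimal NumPy sketch of an InfoNCE-style loss (the objective behind methods like SimCLR): two augmented views of the same sample form a positive pair, and all other samples in the batch serve as negatives. The function name, batch shapes, and temperature value are illustrative choices, not taken from any of the listed papers.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """InfoNCE-style contrastive loss.

    z1, z2: (N, D) arrays of embeddings, where row i of z1 and row i
    of z2 are two views of the same sample (a positive pair); all
    other rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)

    # (N, N) similarity matrix, scaled by temperature.
    logits = z1 @ z2.T / temperature

    # Log-softmax over each row, with a max-shift for numerical stability.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))

    # Positive pairs sit on the diagonal; the loss pushes their
    # probability toward 1 relative to the in-batch negatives.
    return -np.mean(np.diag(log_probs))
```

As a sanity check, embeddings whose two views are nearly identical should incur a lower loss than embeddings paired with unrelated views, since the diagonal similarities then dominate each row.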
Papers
Robustness of Nonlinear Representation Learning
When the Future Becomes the Past: Taming Temporal Correspondence for Self-supervised Video Representation Learning
Continual Contrastive Learning on Tabular Data with Out of Distribution
Conjuring Positive Pairs for Efficient Unification of Representation Learning and Image Synthesis
Semi-KAN: KAN Provides an Effective Representation for Semi-Supervised Learning in Medical Image Segmentation