Representation Learning
Representation learning aims to produce compact, meaningful representations of data that capture its underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with contrastive learning, transformers, or mixture-of-experts models, while tackling challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
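As a concrete illustration of the contrastive-learning idea mentioned above, here is a minimal NumPy sketch of an InfoNCE-style loss. It is a generic illustration, not the method of any paper listed below; the function name, temperature value, and batch layout are assumptions for the example. Embeddings of two augmented views of the same example form positive pairs, and all other pairings in the batch act as negatives.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss for paired embeddings.

    z1, z2: (batch, dim) arrays where z1[i] and z2[i] are two views of
    the same example; every other pair in the batch is a negative.
    (Illustrative sketch, not any specific paper's implementation.)
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    logits = z1 @ z2.T / temperature  # (batch, batch) similarity matrix
    # Numerically stable log-softmax over each row.
    logits = logits - logits.max(axis=1, keepdims=True)
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # Cross-entropy with the matching index i as the target for row i.
    return -np.mean(np.diag(log_probs))
```

Intuitively, the loss is small when each embedding is most similar to its own positive view and dissimilar to the rest of the batch; training an encoder to minimize it pulls views of the same example together and pushes different examples apart.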
Papers
Self-Supervised Representation Learning With MUlti-Segmental Informational Coding (MUSIC)
Chuang Niu, Ge Wang
Compositional Mixture Representations for Vision and Text
Stephan Alaniz, Marco Federici, Zeynep Akata
Local Distance Preserving Auto-encoders using Continuous k-Nearest Neighbours Graphs
Nutan Chen, Patrick van der Smagt, Botond Cseke
SupMAE: Supervised Masked Autoencoders Are Efficient Vision Learners
Feng Liang, Yangguang Li, Diana Marculescu
Improving VAE-based Representation Learning
Mingtian Zhang, Tim Z. Xiao, Brooks Paige, David Barber
Group-wise Reinforcement Feature Generation for Optimal and Explainable Representation Space Reconstruction
Dongjie Wang, Yanjie Fu, Kunpeng Liu, Xiaolin Li, Yan Solihin
Phased Progressive Learning with Coupling-Regulation-Imbalance Loss for Imbalanced Data Classification
Liang Xu, Yi Cheng, Fan Zhang, Bingxuan Wu, Pengfei Shao, Peng Liu, Shuwei Shen, Peng Yao, Ronald X. Xu
Multi-Augmentation for Efficient Visual Representation Learning for Self-supervised Pre-training
Van-Nhiem Tran, Chi-En Huang, Shen-Hsuan Liu, Kai-Lin Yang, Timothy Ko, Yung-Hui Li