Representation Learning
Representation learning aims to produce meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with techniques like contrastive learning, transformers, and mixture-of-experts models, and addresses challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference. A minimal sketch of one of the techniques named here, contrastive learning, is given below.
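As a concrete illustration of the contrastive objectives mentioned above, the following is a minimal sketch of an InfoNCE-style loss. The batch size, embedding dimension, and temperature are illustrative assumptions, not details drawn from any of the papers listed in this section.

```python
# Minimal sketch of a contrastive (InfoNCE-style) loss for
# self-supervised representation learning. All sizes and the
# temperature below are illustrative assumptions.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over two batches of embeddings.

    z1[i] and z2[i] are representations of two augmented views of the
    same input; all other pairs in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)          # project embeddings onto the unit hypersphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities, scaled
    targets = torch.arange(z1.size(0))   # the positive pair sits on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random tensors standing in for an encoder's output.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(info_nce_loss(z1, z2).item())
```

In this setup the loss pulls the two views of each input together while pushing them away from the other examples in the batch, which is the basic mechanism behind contrastive representation learning.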
Papers
MMGA: Multimodal Learning with Graph Alignment
Xuan Yang, Quanjin Tao, Xiao Feng, Donghong Cai, Xiang Ren, Yang Yang
Generalizing in the Real World with Representation Learning
Tegan Maharaj
Towards Efficient and Effective Self-Supervised Learning of Visual Representations
Sravanti Addepalli, Kaushal Bhogale, Priyam Dey, R. Venkatesh Babu
SHINE: SubHypergraph Inductive Neural nEtwork
Yuan Luo
The Hidden Uniform Cluster Prior in Self-Supervised Learning
Mahmoud Assran, Randall Balestriero, Quentin Duval, Florian Bordes, Ishan Misra, Piotr Bojanowski, Pascal Vincent, Michael Rabbat, Nicolas Ballas
A Brief Survey on Representation Learning based Graph Dimensionality Reduction Techniques
Akhil Pandey Akella
IMB-NAS: Neural Architecture Search for Imbalanced Datasets
Rahul Duggal, Shengyun Peng, Hao Zhou, Duen Horng Chau
Adversarial Robustness of Representation Learning for Knowledge Graphs
Peru Bhardwaj
Federated Training of Dual Encoding Models on Small Non-IID Client Datasets
Raviteja Vemulapalli, Warren Richard Morningstar, Philip Andrew Mansfield, Hubert Eichner, Karan Singhal, Arash Afkanpour, Bradley Green