Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with contrastive learning, transformers, or mixture-of-experts models, and tackles challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
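To make the contrastive-learning objective mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss, the core of many contrastive methods (including several of the papers listed below). This is an illustrative example, not code from any listed paper; the function name, embedding dimension, batch size, and temperature are all arbitrary choices for the sketch.

```python
# Minimal InfoNCE-style contrastive loss (illustrative sketch).
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over a batch of paired views.

    z_a, z_b: (batch, dim) embeddings of two views of the same examples.
    Row i of z_a and row i of z_b form the positive pair; all other rows
    in the batch act as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # Pairwise cosine similarities, scaled by the temperature.
    logits = z_a @ z_b.t() / temperature
    # Positives lie on the diagonal, so the target for row i is class i.
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Usage with placeholder embeddings from any encoder (hypothetical shapes):
z_a, z_b = torch.randn(32, 128), torch.randn(32, 128)
loss = info_nce_loss(z_a, z_b)
```

Pulling matched views together while pushing apart the rest of the batch is what encourages the learned representations to capture structure shared across views rather than view-specific noise.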
Papers
Flexible infinite-width graph convolutional networks and the importance of representation learning
Ben Anson, Edward Milsom, Laurence Aitchison
TEE4EHR: Transformer Event Encoder for Better Representation Learning in Electronic Health Records
Hojjat Karami, David Atienza, Anisoara Ionescu
Jointly Learning Representations for Map Entities via Heterogeneous Graph Contrastive Learning
Jiawei Jiang, Yifan Yang, Jingyuan Wang, Junjie Wu
Constrained Multiview Representation for Self-supervised Contrastive Learning
Siyuan Dai, Kai Ye, Kun Zhao, Ge Cui, Haoteng Tang, Liang Zhan
Minimum Description Length and Generalization Guarantees for Representation Learning
Milad Sefidgaran, Abdellatif Zaidi, Piotr Krasnowski
Discovering interpretable models of scientific image data with deep learning
Christopher J. Soelistyo, Alan R. Lowe
RecDCL: Dual Contrastive Learning for Recommendation
Dan Zhang, Yangliao Geng, Wenwen Gong, Zhongang Qi, Zhiyu Chen, Xing Tang, Ying Shan, Yuxiao Dong, Jie Tang
DGNN: Decoupled Graph Neural Networks with Structural Consistency between Attribute and Graph Embedding Representations
Jinlu Wang, Jipeng Guo, Yanfeng Sun, Junbin Gao, Shaofan Wang, Yachao Yang, Baocai Yin