Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with contrastive learning, transformers, or mixture-of-experts models, and addresses challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
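To make the contrastive-learning idea mentioned above concrete, here is a minimal sketch of an InfoNCE-style objective in PyTorch. It is illustrative only, not taken from any of the papers listed below; names such as info_nce_loss and embed_dim, and the temperature value, are assumptions for the example.

```python
# Minimal sketch of a contrastive (InfoNCE-style) objective, assuming paired
# "views" of the same examples (e.g., two augmentations of each input).
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Each row of z1 should match the same row of z2.

    z1, z2: (batch, embed_dim) embeddings of two views of the same batch.
    """
    z1 = F.normalize(z1, dim=1)                 # cosine similarity via unit vectors
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature            # (batch, batch) similarity matrix
    targets = torch.arange(z1.size(0), device=z1.device)  # positives on the diagonal
    return F.cross_entropy(logits, targets)

# Usage: embeddings from any encoder; the loss pulls matched pairs together
# and pushes apart the other in-batch examples ("negatives").
batch, embed_dim = 32, 128
z1, z2 = torch.randn(batch, embed_dim), torch.randn(batch, embed_dim)
loss = info_nce_loss(z1, z2)
```

The in-batch-negatives design is what makes this objective cheap: each batch of size n yields n positive pairs and n(n-1) negative pairs without any extra sampling machinery.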
Papers
Entity Alignment with Unlabeled Dangling Cases
Hang Yin, Dong Ding, Liyao Xiang, Yuheng He, Yihan Wu, Xinbing Wang, Chenghu Zhou
Probabilistic World Modeling with Asymmetric Distance Measure
Meng Song
Time Series Representation Learning with Supervised Contrastive Temporal Transformer
Yuansan Liu, Sudanthi Wijewickrema, Christofer Bester, Stephen O'Leary, James Bailey
HIMap: HybrId Representation Learning for End-to-end Vectorized HD Map Construction
Yi Zhou, Hui Zhang, Jiaqian Yu, Yifan Yang, Sangil Jung, Seung-In Park, ByungIn Yoo
Link Prediction for Social Networks using Representation Learning and Heuristic-based Features
Samarth Khanna, Sree Bhattacharyya, Sudipto Ghosh, Kushagra Agarwal, Asit Kumar Das
Unity by Diversity: Improved Representation Learning in Multimodal VAEs
Thomas M. Sutter, Yang Meng, Andrea Agostini, Daphné Chopard, Norbert Fortin, Julia E. Vogt, Babak Shahbaba, Stephan Mandt
Synthetic Privileged Information Enhances Medical Image Representation Learning
Lucas Farndale, Chris Walsh, Robert Insall, Ke Yuan