Representation Learning
Representation learning aims to produce compact, meaningful data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on learning robust, generalizable representations, often using techniques such as contrastive learning, transformers, and mixture-of-experts models, while addressing challenges including disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommender systems to medical image analysis and causal inference.
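To make one of the techniques mentioned above concrete, the sketch below shows a minimal InfoNCE-style contrastive loss in PyTorch, a common building block in contrastive representation learning. The function name, temperature value, and toy usage are illustrative assumptions and are not drawn from any of the papers listed here.

```python
import torch
import torch.nn.functional as F


def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    z_a, z_b: (batch, dim) embeddings of two augmented views of the same batch.
    Row i of z_a and row i of z_b form the positive pair; all other rows act as negatives.
    """
    # L2-normalize so dot products equal cosine similarities.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)

    # Pairwise similarity matrix, scaled by the temperature.
    logits = z_a @ z_b.t() / temperature

    # For each row, the matching index on the diagonal is the positive class.
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)


if __name__ == "__main__":
    # Toy usage: random embeddings standing in for two encoder outputs.
    a, b = torch.randn(8, 128), torch.randn(8, 128)
    print(info_nce_loss(a, b).item())
```

In practice the two views would come from an encoder applied to different augmentations of the same inputs; the loss pulls matched embeddings together and pushes mismatched ones apart.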
Papers
On the Informativeness of Supervision Signals
Ilia Sucholutsky, Ruairidh M. Battleday, Katherine M. Collins, Raja Marjieh, Joshua C. Peterson, Pulkit Singh, Umang Bhatt, Nori Jacoby, Adrian Weller, Thomas L. Griffiths
Deep Multimodal Fusion for Generalizable Person Re-identification
Suncheng Xiang, Hao Chen, Wei Ran, Zefang Yu, Ting Liu, Dahong Qian, Yuzhuo Fu
Adversarial Auto-Augment with Label Preservation: A Representation Learning Principle Guided Approach
Kaiwen Yang, Yanchao Sun, Jiahao Su, Fengxiang He, Xinmei Tian, Furong Huang, Tianyi Zhou, Dacheng Tao
A robust estimator of mutual information for deep learning interpretability
Davide Piras, Hiranya V. Peiris, Andrew Pontzen, Luisa Lucie-Smith, Ningyuan Guo, Brian Nord
A picture of the space of typical learnable tasks
Rahul Ramesh, Jialin Mao, Itay Griniasty, Rubing Yang, Han Kheng Teoh, Mark Transtrum, James P. Sethna, Pratik Chaudhari
Representation Learning for General-sum Low-rank Markov Games
Chengzhuo Ni, Yuda Song, Xuezhou Zhang, Chi Jin, Mengdi Wang
FELRec: Efficient Handling of Item Cold-Start With Dynamic Representation Learning in Recommender Systems
Kuba Weimann, Tim O. F. Conrad
DyG2Vec: Efficient Representation Learning for Dynamic Graphs
Mohammad Ali Alomrani, Mahdi Biparva, Yingxue Zhang, Mark Coates
Disentangled Text Representation Learning with Information-Theoretic Perspective for Adversarial Robustness
Jiahao Zhao, Wenji Mao
Masked Modeling Duo: Learning Representations by Encouraging Both Networks to Model the Input
Daisuke Niizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Kunio Kashino
Palm up: Playing in the Latent Manifold for Unsupervised Pretraining
Hao Liu, Tom Zahavy, Volodymyr Mnih, Satinder Singh
UniNL: Aligning Representation Learning with Scoring Function for OOD Detection via Unified Neighborhood Learning
Yutao Mou, Pei Wang, Keqing He, Yanan Wu, Jingang Wang, Wei Wu, Weiran Xu
DyTed: Disentangled Representation Learning for Discrete-time Dynamic Graph
Kaike Zhang, Qi Cao, Gaolin Fang, Bingbing Xu, Hongjian Zou, Huawei Shen, Xueqi Cheng