Representation Learning
Representation learning aims to produce compact, meaningful encodings of data that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with contrastive learning, transformers, and mixture-of-experts models, and tackles challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across applications ranging from recommendation systems to medical image analysis and causal inference.
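Since contrastive learning recurs throughout the papers below, a minimal sketch of an InfoNCE-style contrastive objective may help fix ideas. This is an illustrative example, not the method of any listed paper; the function name `info_nce_loss`, the temperature value, and the batch/embedding sizes are assumptions chosen for the demo.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """InfoNCE loss over a batch of paired embeddings.

    z_a, z_b: (batch, dim) embeddings of two views of the same inputs.
    Row i of z_a and row i of z_b form a positive pair; every other
    row in the batch serves as a negative.
    """
    # Unit-normalize so the dot product is cosine similarity.
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    # (batch, batch) similarity matrix, scaled by temperature.
    logits = z_a @ z_b.T / temperature
    # Positives sit on the diagonal, so the target for row i is class i.
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)

# Toy usage with random stand-ins for encoder outputs (64 pairs, 128-dim).
batch, dim = 64, 128
loss = info_nce_loss(torch.randn(batch, dim), torch.randn(batch, dim))
print(loss.item())
```

In practice the two views come from data augmentations passed through a shared encoder, and larger batches supply more negatives per positive, which is why batch-size scaling (as in the contrastive-loss paper listed below) matters for this family of objectives.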
Papers
Perturbation-based Graph Active Learning for Weakly-Supervised Belief Representation Learning
Dachun Sun, Ruijie Wang, Jinning Li, Ruipeng Han, Xinyi Liu, You Lyu, Tarek Abdelzaher
Indication Finding: a novel use case for representation learning
Maren Eckhoff, Valmir Selimi, Alexander Aranovitch, Ian Lyons, Emily Briggs, Jennifer Hou, Alex Devereson, Matej Macak, David Champagne, Chris Anagnostopoulos
Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss
Zesen Cheng, Hang Zhang, Kehan Li, Sicong Leng, Zhiqiang Hu, Fei Wu, Deli Zhao, Xin Li, Lidong Bing
IdenBAT: Disentangled Representation Learning for Identity-Preserved Brain Age Transformation
Junyeong Maeng, Kwanseok Oh, Wonsik Jung, Heung-Il Suk
Sliding Puzzles Gym: A Scalable Benchmark for State Representation in Visual Reinforcement Learning
Bryan L. M. de Oliveira, Murilo L. da Luz, Bruno Brandão, Luana G. B. Martins, Telma W. de L. Soares, Luckeciano C. Melo
Normalizing self-supervised learning for provably reliable Change Point Detection
Alexandra Bazarova, Evgenia Romanenkova, Alexey Zaytsev
Representation Learning of Structured Data for Medical Foundation Models
Vijay Prakash Dwivedi, Viktor Schlegel, Andy T. Liu, Thanh-Tung Nguyen, Abhinav Ramesh Kashyap, Jeng Wei, Wei-Hsian Yin, Stefan Winkler, Robby T. Tan
EH-MAM: Easy-to-Hard Masked Acoustic Modeling for Self-Supervised Speech Representation Learning
Ashish Seth, Ramaneswaran Selvakumar, S Sakshi, Sonal Kumar, Sreyan Ghosh, Dinesh Manocha
Learning Efficient Representations of Neutrino Telescope Events
Felix J. Yu, Nicholas Kamp, Carlos A. Argüelles
Learning Representations for Reasoning: Generalizing Across Diverse Structures
Zhaocheng Zhu
Explanation-Preserving Augmentation for Semi-Supervised Graph Representation Learning
Zhuomin Chen, Jingchao Ni, Hojat Allah Salehi, Xu Zheng, Esteban Schafir, Farhad Shirani, Dongsheng Luo
Self-Supervised Learning of Disentangled Representations for Multivariate Time-Series
Ching Chang, Chiao-Tung Chan, Wei-Yao Wang, Wen-Chih Peng, Tien-Fu Chen