Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture the underlying structure of data and facilitate downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with techniques like contrastive learning, transformers, and mixture-of-experts models, and addresses challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
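To make one of the techniques above concrete: contrastive learning trains an encoder by pulling embeddings of two augmented views of the same example together while pushing apart embeddings of different examples. The sketch below is a minimal NumPy implementation of an NT-Xent-style contrastive loss (the function name and shapes are illustrative, not taken from any paper listed here), assuming the encoder has already produced two view embeddings `z1` and `z2`.

```python
import numpy as np

def nt_xent_loss(z1, z2, temperature=0.5):
    """Minimal NT-Xent (normalized temperature-scaled cross-entropy) loss.

    z1, z2: (N, D) arrays of embeddings for two augmented views of the
    same N examples. Positive pairs are (z1[i], z2[i]); every other
    sample in the 2N-example batch serves as a negative.
    """
    z = np.concatenate([z1, z2], axis=0)               # (2N, D)
    z = z / np.linalg.norm(z, axis=1, keepdims=True)   # L2-normalize rows
    sim = z @ z.T / temperature                        # scaled cosine sims
    n = z1.shape[0]
    # Exclude each sample's similarity with itself from the softmax.
    np.fill_diagonal(sim, -np.inf)
    # Index of the positive partner for each of the 2N rows.
    pos = np.concatenate([np.arange(n, 2 * n), np.arange(n)])
    # Cross-entropy of the positive under a softmax over all candidates.
    log_prob = sim - np.log(np.exp(sim).sum(axis=1, keepdims=True))
    return -log_prob[np.arange(2 * n), pos].mean()
```

When the two views agree (embeddings nearly identical), the positive pair dominates the softmax and the loss is small; mismatched views yield a larger loss, which is the signal that drives the encoder toward view-invariant representations.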
Papers
Enhance Hyperbolic Representation Learning via Second-order Pooling
Kun Song, Ruben Solozabal, Li Hao, Lu Ren, Moloud Abdar, Qing Li, Fakhri Karray, Martin Takac
SimSiam Naming Game: A Unified Approach for Representation Learning and Emergent Communication
Nguyen Le Hoang, Tadahiro Taniguchi, Fang Tianwei, Akira Taniguchi
Representational learning for an anomalous sound detection system with source separation model
Seunghyeon Shin, Seokjin Lee
Uncertainty Quantification via Hölder Divergence for Multi-View Representation Learning
An Zhang, Ming Li, Chun Li, Zhaoxia Liu, Ye Zhang, Fei Richard Yu
Perturbation-based Graph Active Learning for Weakly-Supervised Belief Representation Learning
Dachun Sun, Ruijie Wang, Jinning Li, Ruipeng Han, Xinyi Liu, You Lyu, Tarek Abdelzaher
Indication Finding: a novel use case for representation learning
Maren Eckhoff, Valmir Selimi, Alexander Aranovitch, Ian Lyons, Emily Briggs, Jennifer Hou, Alex Devereson, Matej Macak, David Champagne, Chris Anagnostopoulos
Breaking the Memory Barrier: Near Infinite Batch Size Scaling for Contrastive Loss
Zesen Cheng, Hang Zhang, Kehan Li, Sicong Leng, Zhiqiang Hu, Fei Wu, Deli Zhao, Xin Li, Lidong Bing
IdenBAT: Disentangled Representation Learning for Identity-Preserved Brain Age Transformation
Junyeong Maeng, Kwanseok Oh, Wonsik Jung, Heung-Il Suk
Learning from Neighbors: Category Extrapolation for Long-Tail Learning
Shizhen Zhao, Xin Wen, Jiahui Liu, Chuofan Ma, Chunfeng Yuan, Xiaojuan Qi
MI-VisionShot: Few-shot adaptation of vision-language models for slide-level classification of histopathological images
Pablo Meseguer, Rocío del Amor, Valery Naranjo
Sliding Puzzles Gym: A Scalable Benchmark for State Representation in Visual Reinforcement Learning
Bryan L. M. de Oliveira, Murilo L. da Luz, Bruno Brandão, Luana G. B. Martins, Telma W. de L. Soares, Luckeciano C. Melo
Normalizing self-supervised learning for provably reliable Change Point Detection
Alexandra Bazarova, Evgenia Romanenkova, Alexey Zaytsev
Representation Learning of Structured Data for Medical Foundation Models
Vijay Prakash Dwivedi, Viktor Schlegel, Andy T. Liu, Thanh-Tung Nguyen, Abhinav Ramesh Kashyap, Jeng Wei, Wei-Hsian Yin, Stefan Winkler, Robby T. Tan
EH-MAM: Easy-to-Hard Masked Acoustic Modeling for Self-Supervised Speech Representation Learning
Ashish Seth, Ramaneswaran Selvakumar, S Sakshi, Sonal Kumar, Sreyan Ghosh, Dinesh Manocha