Representation Learning
Representation learning aims to produce compact, meaningful data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with contrastive learning, transformers, and mixture-of-experts models, while addressing challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
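As a concrete illustration of one technique named above, the sketch below shows a minimal InfoNCE-style contrastive loss in PyTorch: two augmented views of the same batch are pulled together while other in-batch samples act as negatives. The function name `info_nce_loss` and the temperature value are illustrative assumptions, not drawn from any of the listed papers.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor,
                  temperature: float = 0.1) -> torch.Tensor:
    """Minimal InfoNCE contrastive loss (illustrative sketch).

    z1, z2: (batch, dim) embeddings of two augmented views of the same
    inputs; row i of z1 and row i of z2 form a positive pair, and all
    other rows in the batch serve as negatives.
    """
    # L2-normalize so dot products are cosine similarities.
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    # Pairwise similarity matrix between the two views, scaled by temperature.
    logits = z1 @ z2.t() / temperature  # shape: (batch, batch)
    # The diagonal entries are the positives; everything else is a negative.
    targets = torch.arange(z1.size(0), device=z1.device)
    # Symmetrize so each view predicts its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) +
                  F.cross_entropy(logits.t(), targets))

# Usage: embeddings from any encoder applied to two augmentations of a batch.
z1 = torch.randn(32, 128)
z2 = torch.randn(32, 128)
loss = info_nce_loss(z1, z2)
```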
Papers
ReL-SAR: Representation Learning for Skeleton Action Recognition with Convolutional Transformers and BYOL
Safwen Naimi, Wassim Bouachir, Guillaume-Alexandre Bilodeau
Adapted-MoE: Mixture of Experts with Test-Time Adaption for Anomaly Detection
Tianwu Lei, Silin Chen, Bohan Wang, Zhengkai Jiang, Ningmu Zou
MSLIQA: Enhancing Learning Representations for Image Quality Assessment through Multi-Scale Learning
Nasim Jamshidi Avanaki, Abhijay Ghildiyal, Nabajeet Barman, Saman Zadtootaghaj
DetectBERT: Towards Full App-Level Representation Learning to Detect Android Malware
Tiezhu Sun, Nadia Daoudi, Kisub Kim, Kevin Allix, Tegawendé F. Bissyandé, Jacques Klein
Supervised Representation Learning towards Generalizable Assembly State Recognition
Tim J. Schoonbeek, Goutham Balachandran, Hans Onvlee, Tim Houben, Shao-Hsuan Hung, Jacek Kustra, Peter H. N. de With, Fons van der Sommen
Representation Learning of Complex Assemblies, An Effort to Improve Corporate Scope 3 Emissions Calculation
Ajay Chatterjee, Srikanth Ranganathan