Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and facilitate downstream tasks such as classification, prediction, and control. Current research focuses on building robust, generalizable representations, often using techniques such as contrastive learning, transformers, and mixture-of-experts models; key challenges include disentanglement, handling noisy or sparse data, and improving efficiency in multi-task and continual-learning settings. These advances have broad implications, improving the performance and interpretability of machine learning models across applications ranging from recommendation systems to medical image analysis and causal inference.
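To make the contrastive-learning idea mentioned above concrete, here is a minimal, self-contained sketch of an InfoNCE-style objective: two augmented views of the same input are encoded, and each embedding must identify its counterpart among the batch. All names (the toy encoder, the noise-based "augmentations", the `info_nce_loss` helper) are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal sketch of a contrastive (InfoNCE-style) representation-learning loss.
# Illustrative only; not tied to any specific paper in this list.
import torch
import torch.nn.functional as F


def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1, z2: (batch, dim) embeddings of two views of the same inputs."""
    z1 = F.normalize(z1, dim=1)          # project embeddings onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities, scaled by temperature
    targets = torch.arange(z1.size(0))   # the positive pair sits on the diagonal
    # Symmetrized cross-entropy: each view must pick out its counterpart.
    return 0.5 * (F.cross_entropy(logits, targets) + F.cross_entropy(logits.t(), targets))


if __name__ == "__main__":
    # Toy usage: a small encoder producing 32-d representations for two noisy views.
    encoder = torch.nn.Sequential(
        torch.nn.Linear(64, 128), torch.nn.ReLU(), torch.nn.Linear(128, 32)
    )
    x = torch.randn(16, 64)
    view1 = x + 0.1 * torch.randn_like(x)  # stand-in for a data augmentation
    view2 = x + 0.1 * torch.randn_like(x)
    loss = info_nce_loss(encoder(view1), encoder(view2))
    print(loss.item())
```

In practice the noise perturbations would be replaced by domain-appropriate augmentations, and the encoder by whatever architecture (e.g., a transformer) the task calls for; the loss itself is unchanged.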
Papers
RemoCap: Disentangled Representation Learning for Motion Capture
Hongsheng Wang, Lizao Zhang, Zhangnan Zhong, Shuolin Xu, Xinrui Zhou, Shengyu Zhang, Huahao Xu, Fei Wu, Feng Lin
Learning Structure and Knowledge Aware Representation with Large Language Models for Concept Recommendation
Qingyao Li, Wei Xia, Kounianhua Du, Qiji Zhang, Weinan Zhang, Ruiming Tang, Yong Yu
CTS: Concurrent Teacher-Student Reinforcement Learning for Legged Locomotion
Hongxi Wang, Haoxiang Luo, Wei Zhang, Hua Chen
Review of Deep Representation Learning Techniques for Brain-Computer Interfaces and Recommendations
Pierre Guetschel, Sara Ahmadi, Michael Tangermann
Empowering Small-Scale Knowledge Graphs: A Strategy of Leveraging General-Purpose Knowledge Graphs for Enriched Embeddings
Albert Sawczyn, Jakub Binkowski, Piotr Bielak, Tomasz Kajdanowicz