Representation Learning
Representation learning aims to produce compact, meaningful data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust and generalizable representations, often built with contrastive learning, transformers, and mixture-of-experts models, while addressing challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
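As a concrete illustration of one technique named above, the sketch below shows a minimal InfoNCE-style contrastive loss in PyTorch. It is not taken from any of the listed papers; the function name info_nce_loss and its parameters are illustrative assumptions, and the loss simply treats two augmented views of the same item as a positive pair and all other items in the batch as negatives.

# Illustrative sketch (not from any listed paper): a minimal InfoNCE-style
# contrastive loss, a common objective in representation learning.
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    # z1, z2: (batch, dim) embeddings of two augmented views of the same items.
    z1 = F.normalize(z1, dim=1)          # project embeddings onto the unit sphere
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.t() / temperature   # pairwise cosine similarities, scaled
    targets = torch.arange(z1.size(0), device=z1.device)  # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Usage with random embeddings standing in for an encoder's output:
if __name__ == "__main__":
    z1, z2 = torch.randn(32, 128), torch.randn(32, 128)
    print(info_nce_loss(z1, z2).item())

In practice the two views would come from a shared encoder applied to different augmentations of the same input, and the temperature is a tunable hyperparameter.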
Papers
TimewarpVAE: Simultaneous Time-Warping and Representation Learning of Trajectories
Travers Rhodes, Daniel D. Lee
Representation Learning with Large Language Models for Recommendation
Xubin Ren, Wei Wei, Lianghao Xia, Lixin Su, Suqi Cheng, Junfeng Wang, Dawei Yin, Chao Huang
State Sequences Prediction via Fourier Transform for Representation Learning
Mingxuan Ye, Yufei Kuang, Jie Wang, Rui Yang, Wengang Zhou, Houqiang Li, Feng Wu
UniMAP: Universal SMILES-Graph Representation Learning
Shikun Feng, Lixin Yang, Yanwen Huang, Yuyan Ni, Weiying Ma, Yanyan Lan
Learning to Discern: Imitating Heterogeneous Human Demonstrations with Preference and Representation Learning
Sachit Kuhar, Shuo Cheng, Shivang Chopra, Matthew Bronars, Danfei Xu