Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often employing techniques such as contrastive learning, transformers, and mixture-of-experts models, while addressing challenges like disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
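To make the contrastive-learning technique mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss in NumPy. It assumes two embedding matrices `z1` and `z2` whose row `i` comes from two augmented views of the same sample (a positive pair), with all other rows serving as negatives; the function name and temperature value are illustrative, not from any specific paper listed here.

```python
import numpy as np

def info_nce_loss(z1, z2, temperature=0.5):
    """Minimal InfoNCE-style contrastive loss (illustrative sketch).

    z1, z2: (N, D) embeddings of two augmented views of N samples;
    row i of z1 and row i of z2 form a positive pair, all other
    rows act as negatives.
    """
    # L2-normalize so the dot product is cosine similarity.
    z1 = z1 / np.linalg.norm(z1, axis=1, keepdims=True)
    z2 = z2 / np.linalg.norm(z2, axis=1, keepdims=True)
    # Pairwise similarity matrix scaled by temperature.
    logits = z1 @ z2.T / temperature                     # (N, N)
    # Cross-entropy with positives on the diagonal.
    logits -= logits.max(axis=1, keepdims=True)          # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    return -np.mean(np.diag(log_probs))

# Toy check: matched views should score a lower loss than unrelated pairs.
rng = np.random.default_rng(0)
z = rng.normal(size=(8, 16))
aligned = info_nce_loss(z, z)                     # positives are exact matches
shuffled = info_nce_loss(z, rng.normal(size=(8, 16)))
```

The intuition is that minimizing this loss pulls embeddings of the same sample's views together while pushing apart embeddings of different samples, which is the core mechanism behind contrastive representation learning.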
Papers
TimeDRL: Disentangled Representation Learning for Multivariate Time-Series
Ching Chang, Chiao-Tung Chan, Wei-Yao Wang, Wen-Chih Peng, Tien-Fu Chen
Series2Vec: Similarity-based Self-supervised Representation Learning for Time Series Classification
Navid Mohammadi Foumani, Chang Wei Tan, Geoffrey I. Webb, Hamid Rezatofighi, Mahsa Salehi
PointMoment: Mixed-Moment-based Self-Supervised Representation Learning for 3D Point Clouds
Xin Cao, Xinxin Han, Yifan Wang, Mengna Yang, Kang Li
PointJEM: Self-supervised Point Cloud Understanding for Reducing Feature Redundancy via Joint Entropy Maximization
Xin Cao, Huan Xia, Xinxin Han, Yifan Wang, Kang Li, Linzhi Su
Representation Learning in a Decomposed Encoder Design for Bio-inspired Hebbian Learning
Achref Jaziri, Sina Ditzel, Iuliia Pliushch, Visvanathan Ramesh
White-Box Transformers via Sparse Rate Reduction: Compression Is All There Is?
Yaodong Yu, Sam Buchanan, Druv Pai, Tianzhe Chu, Ziyang Wu, Shengbang Tong, Hao Bai, Yuexiang Zhai, Benjamin D. Haeffele, Yi Ma