Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with contrastive learning, transformers, and mixture-of-experts models, and addresses challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference. As a concrete illustration of one of these techniques, a minimal contrastive-learning objective is sketched below.
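The sketch below is a minimal, illustrative implementation of the NT-Xent (InfoNCE) loss commonly used in contrastive representation learning; it is not drawn from any of the papers listed here. The encoder is stood in for by random embeddings, and the batch size, embedding dimension, and temperature are arbitrary assumptions for the example.

```python
import torch
import torch.nn.functional as F

def nt_xent_loss(z1, z2, temperature=0.5):
    """NT-Xent (InfoNCE) loss over two augmented views of the same batch.

    z1, z2: (N, D) embeddings of two views; row i of z1 and row i of z2
    form a positive pair, and all other rows act as negatives.
    """
    n = z1.size(0)
    z = torch.cat([z1, z2], dim=0)            # (2N, D) stacked views
    z = F.normalize(z, dim=1)                 # unit vectors -> cosine similarity
    sim = (z @ z.t()) / temperature           # (2N, 2N) similarity logits
    sim.fill_diagonal_(float("-inf"))         # exclude self-similarity
    # The positive for row i is its counterpart in the other view.
    targets = torch.cat([torch.arange(n, 2 * n), torch.arange(0, n)])
    return F.cross_entropy(sim, targets)

# Toy usage: random "embeddings" stand in for encoder outputs.
if __name__ == "__main__":
    torch.manual_seed(0)
    view1 = torch.randn(8, 128)
    view2 = view1 + 0.1 * torch.randn(8, 128)  # weakly perturbed positives
    print(nt_xent_loss(view1, view2).item())
```

In practice the two views come from data augmentations passed through a shared encoder, and the loss pulls matching views together while pushing apart all other samples in the batch.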
Papers
Semi-Supervised Manifold Learning with Complexity Decoupled Chart Autoencoders
Stefan C. Schonsheck, Scott Mahan, Timo Klock, Alexander Cloninger, Rongjie Lai
Representation Learning of Knowledge Graph for Wireless Communication Networks
Shiwen He, Yeyu Ou, Liangpeng Wang, Hang Zhan, Peng Ren, Yongming Huang
Representation Learning for the Automatic Indexing of Sound Effects Libraries
Alison B. Ma, Alexander Lerch
Prompt Vision Transformer for Domain Generalization
Zangwei Zheng, Xiangyu Yue, Kai Wang, Yang You
Learning Program Representations with a Tree-Structured Transformer
Wenhan Wang, Kechi Zhang, Ge Li, Shangqing Liu, Anran Li, Zhi Jin, Yang Liu