Representation Learning
Representation learning aims to produce compact, meaningful data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often using techniques such as contrastive learning, transformers, and mixture-of-experts models, while addressing challenges including disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across applications ranging from recommendation systems to medical image analysis and causal inference.
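As a rough illustration of the contrastive learning mentioned above, here is a minimal NumPy sketch of an InfoNCE-style objective, in which each anchor embedding is pulled toward its matching positive and pushed away from the other samples in the batch. The function name, temperature value, and batch setup are illustrative assumptions, not taken from any paper listed below.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """InfoNCE contrastive loss: anchor i's positive is row i of
    `positives`; every other row in the batch acts as a negative."""
    # L2-normalize embeddings so dot products are cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # The correct pairing lies on the diagonal: anchor i <-> positive i
    return -np.mean(np.diag(log_probs))

# Toy usage: aligned pairs should score a lower loss than mismatched ones
rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
aligned_loss = info_nce_loss(x, x)
shuffled_loss = info_nce_loss(x, x[::-1])
```

In practice the two views fed to such a loss come from data augmentations of the same sample, and the embeddings are produced by a trainable encoder rather than drawn at random.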
Papers
Hyperbolic Representation Learning: Revisiting and Advancing
Menglin Yang, Min Zhou, Rex Ying, Yankai Chen, Irwin King
Exploring the Application of Large-scale Pre-trained Models on Adverse Weather Removal
Zhentao Tan, Yue Wu, Qiankun Liu, Qi Chu, Le Lu, Jieping Ye, Nenghai Yu
Active Representation Learning for General Task Space with Applications in Robotics
Yifang Chen, Yingbing Huang, Simon S. Du, Kevin Jamieson, Guanya Shi
Advancing Volumetric Medical Image Segmentation via Global-Local Masked Autoencoder
Jia-Xin Zhuang, Luyang Luo, Hao Chen
Correlated Time Series Self-Supervised Representation Learning via Spatiotemporal Bootstrapping
Luxuan Wang, Lei Bai, Ziyue Li, Rui Zhao, Fugee Tsung
CARL-G: Clustering-Accelerated Representation Learning on Graphs
William Shiao, Uday Singh Saini, Yozen Liu, Tong Zhao, Neil Shah, Evangelos E. Papalexakis
Deep denoising autoencoder-based non-invasive blood flow detection for arteriovenous fistula
Li-Chin Chen, Yi-Heng Lin, Li-Ning Peng, Feng-Ming Wang, Yu-Hsin Chen, Po-Hsun Huang, Shang-Feng Yang, Yu Tsao
Spatial Implicit Neural Representations for Global-Scale Species Mapping
Elijah Cole, Grant Van Horn, Christian Lange, Alexander Shepard, Patrick Leary, Pietro Perona, Scott Loarie, Oisin Mac Aodha
Improved Active Multi-Task Representation Learning via Lasso
Yiping Wang, Yifang Chen, Kevin Jamieson, Simon S. Du