Representation Learning
Representation learning aims to create compact, meaningful representations of data that capture its underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on making representations robust and generalizable, often using techniques such as contrastive learning, transformers, and mixture-of-experts models. Open challenges include disentangling factors of variation, handling noisy or sparse data, and improving efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
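To make the contrastive learning technique mentioned above concrete, here is a minimal sketch of an InfoNCE-style objective, in which two augmented views of the same example are pulled together in embedding space while all other pairs are pushed apart. PyTorch is assumed; the function name `info_nce_loss` and the toy data are illustrative and not drawn from any of the papers listed below.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1: torch.Tensor, z2: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """z1[i] and z2[i] are embeddings of two views of the same example;
    matching pairs are attracted, all other pairs are repelled."""
    z1 = F.normalize(z1, dim=-1)        # unit-norm embeddings so dot products are cosine similarities
    z2 = F.normalize(z2, dim=-1)
    logits = z1 @ z2.t() / temperature  # pairwise similarity matrix, scaled by temperature
    targets = torch.arange(z1.size(0))  # positives sit on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random embeddings standing in for an encoder's output.
z1 = torch.randn(8, 32, requires_grad=True)
z2 = torch.randn(8, 32, requires_grad=True)
info_nce_loss(z1, z2).backward()
```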
Papers
Learning Representations for Reasoning: Generalizing Across Diverse Structures
Zhaocheng Zhu
Explanation-Preserving Augmentation for Semi-Supervised Graph Representation Learning
Zhuomin Chen, Jingchao Ni, Hojat Allah Salehi, Xu Zheng, Esteban Schafir, Farhad Shirani, Dongsheng Luo
Self-Supervised Learning of Disentangled Representations for Multivariate Time-Series
Ching Chang, Chiao-Tung Chan, Wei-Yao Wang, Wen-Chih Peng, Tien-Fu Chen
Just-In-Time Software Defect Prediction via Bi-modal Change Representation Learning
Yuze Jiang, Beijun Shen, Xiaodong Gu
FedCCRL: Federated Domain Generalization with Cross-Client Representation Learning
Xinpeng Wang, Yongxin Guo, Xiaoying Tang
SplitSEE: A Splittable Self-supervised Framework for Single-Channel EEG Representation Learning
Rikuto Kotoge, Zheng Chen, Tasuku Kimura, Yasuko Matsubara, Takufumi Yanagisawa, Haruhiko Kishima, Yasushi Sakurai
Representation Learning for Regime Detection in Block Hierarchical Financial Markets
Alexa Orton, Tim Gebbie
Enhancing JEPAs with Spatial Conditioning: Robust and Efficient Representation Learning
Etai Littwin, Vimal Thilak, Anand Gopalakrishnan
StatioCL: Contrastive Learning for Time Series via Non-Stationary and Temporal Contrast
Yu Wu, Ting Dang, Dimitris Spathis, Hong Jia, Cecilia Mascolo
On Discriminative Probabilistic Modeling for Self-Supervised Representation Learning
Bokun Wang, Yunwen Lei, Yiming Ying, Tianbao Yang
Learning Representations of Instruments for Partial Identification of Treatment Effects
Jonas Schweisthal, Dennis Frauen, Maresa Schröder, Konstantin Hess, Niki Kilbertus, Stefan Feuerriegel
SPA: 3D Spatial-Awareness Enables Effective Embodied Representation
Haoyi Zhu, Honghui Yang, Yating Wang, Jiange Yang, Limin Wang, Tong He
Scalable Representation Learning for Multimodal Tabular Transactions
Natraj Raman, Sumitra Ganesh, Manuela Veloso
Learning to Compress: Local Rank and Information Compression in Deep Neural Networks
Niket Patel, Ravid Shwartz-Ziv
GR-2: A Generative Video-Language-Action Model with Web-Scale Knowledge for Robot Manipulation
Chi-Lam Cheang, Guangzeng Chen, Ya Jing, Tao Kong, Hang Li, Yifeng Li, Yuxiao Liu, Hongtao Wu, Jiafeng Xu, Yichu Yang, Hanbo Zhang, Minzhao Zhu
CLOSER: Towards Better Representation Learning for Few-Shot Class-Incremental Learning
Junghun Oh, Sungyong Baik, Kyoung Mu Lee