Representation Learning
Representation learning aims to create meaningful and efficient data representations that capture underlying structure and facilitate downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often built with techniques such as contrastive learning, transformers, and mixture-of-experts models, and addresses challenges including disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
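To make the contrastive-learning technique mentioned above concrete, here is a minimal, generic sketch of an InfoNCE-style loss over paired embeddings. It is illustrative only and not taken from any paper listed below; the function name `info_nce_loss` and the temperature value are assumptions for the example.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a, z_b, temperature=0.1):
    """Illustrative InfoNCE-style contrastive loss (not from any listed paper).

    z_a[i] and z_b[i] are embeddings of two views of the same sample;
    all other pairs in the batch serve as negatives.
    """
    z_a = F.normalize(z_a, dim=1)          # unit-norm embeddings
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature   # pairwise cosine similarities
    targets = torch.arange(z_a.size(0))    # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: 8 samples, 128-dimensional embeddings from two augmented views.
z_a, z_b = torch.randn(8, 128), torch.randn(8, 128)
loss = info_nce_loss(z_a, z_b)
```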
Papers
VisDiff: SDF-Guided Polygon Generation for Visibility Reconstruction and Recognition
Rahul Moorthy, Volkan Isler
Hyper-Representations: Learning from Populations of Neural Networks
Konstantin Schürholt
FreSh: Frequency Shifting for Accelerated Neural Representation Learning
Adam Kania, Marko Mihajlovic, Sergey Prokudin, Jacek Tabor, Przemysław Spurek
A Strategy for Label Alignment in Deep Neural Networks
Xuanrui Zeng
Robustness Reprogramming for Representation Learning
Zhichao Hou, MohamadAli Torkamani, Hamid Krim, Xiaorui Liu
LRHP: Learning Representations for Human Preferences via Preference Pairs
Chenglong Wang, Yang Gan, Yifu Huo, Yongyu Mu, Qiaozhi He, Murun Yang, Tong Xiao, Chunliang Zhang, Tongran Liu, Jingbo Zhu
MLP-KAN: Unifying Deep Representation and Function Learning
Yunhong He, Yifeng Xie, Zhengqing Yuan, Lichao Sun
Learning a Fast Mixing Exogenous Block MDP using a Single Trajectory
Alexander Levine, Peter Stone, Amy Zhang
Long-Sequence Recommendation Models Need Decoupled Embeddings
Ningya Feng, Junwei Pan, Jialong Wu, Baixu Chen, Ximei Wang, Qian Li, Xian Hu, Jie Jiang, Mingsheng Long
See Detail Say Clear: Towards Brain CT Report Generation via Pathological Clue-driven Representation Learning
Chengxin Zheng, Junzhong Ji, Yanzhao Shi, Xiaodan Zhang, Liangqiong Qu
Focus On What Matters: Separated Models For Visual-Based RL Generalization
Di Zhang, Bowen Lv, Hai Zhang, Feifan Yang, Junqiao Zhao, Hang Yu, Chang Huang, Hongtu Zhou, Chen Ye, Changjun Jiang
DMC-VB: A Benchmark for Representation Learning for Control with Visual Distractors
Joseph Ortiz, Antoine Dedieu, Wolfgang Lehrach, Swaroop Guntupalli, Carter Wendelken, Ahmad Humayun, Guangyao Zhou, Sivaramakrishnan Swaminathan, Miguel Lázaro-Gredilla, Kevin Murphy
Transferring disentangled representations: bridging the gap between synthetic and real images
Jacopo Dapueto, Nicoletta Noceti, Francesca Odone
Efficient Fairness-Performance Pareto Front Computation
Mark Kozdoba, Binyamin Perets, Shie Mannor