Representation Learning
Representation learning aims to produce compact, meaningful data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on building robust, generalizable representations, often using techniques like contrastive learning, transformers, and mixture-of-experts models, while addressing challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
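To make the contrastive-learning idea mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss in NumPy: each anchor is pulled toward its matching "positive" view and pushed away from the other samples in the batch, which act as negatives. The function name, batch size, and temperature value are illustrative choices, not from any of the papers listed below.

```python
import numpy as np

def info_nce_loss(anchors, positives, temperature=0.1):
    """Contrastive (InfoNCE) loss: row i of `positives` is the positive
    view for row i of `anchors`; all other rows serve as negatives."""
    # L2-normalize so dot products become cosine similarities
    a = anchors / np.linalg.norm(anchors, axis=1, keepdims=True)
    p = positives / np.linalg.norm(positives, axis=1, keepdims=True)
    logits = a @ p.T / temperature                # (N, N) similarity matrix
    logits -= logits.max(axis=1, keepdims=True)   # numerical stability
    log_probs = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    # matching pairs sit on the diagonal
    return -np.mean(np.diag(log_probs))

rng = np.random.default_rng(0)
x = rng.normal(size=(8, 16))
# positives as light "augmentations" of the anchors vs. unrelated noise
loss_matched = info_nce_loss(x, x + 0.01 * rng.normal(size=x.shape))
loss_random = info_nce_loss(x, rng.normal(size=(8, 16)))
print(loss_matched, loss_random)
```

With aligned anchor/positive pairs the loss is much lower than with random pairings, which is the signal a contrastive encoder is trained to maximize.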
Papers
Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning
Patrik Reizinger, Siyuan Guo, Ferenc Huszár, Bernhard Schölkopf, Wieland Brendel
Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning
Tingyi Lin, Pengju Lyu, Jie Zhang, Yuqing Wang, Cheng Wang, Jianjun Zhu
Transferable Tactile Transformers for Representation Learning Across Diverse Sensors and Tasks
Jialiang Zhao, Yuxiang Ma, Lirui Wang, Edward H. Adelson
Effective Edge-wise Representation Learning in Edge-Attributed Bipartite Graphs
Hewen Wang, Renchi Yang, Xiaokui Xiao
Towards Trustworthy Unsupervised Domain Adaptation: A Representation Learning Perspective for Enhancing Robustness, Discrimination, and Generalization
Jia-Li Yin, Haoyuan Zheng, Ximeng Liu
Semantic Graph Consistency: Going Beyond Patches for Regularizing Self-Supervised Vision Transformers
Chaitanya Devaguptapu, Sumukh Aithal, Shrinivas Ramasubramanian, Moyuru Yamada, Manohar Kaul
VIRL: Volume-Informed Representation Learning towards Few-shot Manufacturability Estimation
Yu-hsuan Chen, Jonathan Cagan, Levent Burak Kara
UniGLM: Training One Unified Language Model for Text-Attributed Graphs
Yi Fang, Dongzhe Fan, Sirui Ding, Ninghao Liu, Qiaoyu Tan
Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations
Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu
Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding
Yunsong Wang, Na Zhao, Gim Hee Lee
An Interpretable Alternative to Neural Representation Learning for Rating Prediction -- Transparent Latent Class Modeling of User Reviews
Giuseppe Serra, Peter Tino, Zhao Xu, Xin Yao
Benchmarking Vision-Language Contrastive Methods for Medical Representation Learning
Shuvendu Roy, Yasaman Parhizkar, Franklin Ogidi, Vahid Reza Khazaie, Michael Colacci, Ali Etemad, Elham Dolatabadi, Arash Afkanpour
Visual Representation Learning with Stochastic Frame Prediction
Huiwon Jang, Dongyoung Kim, Junsu Kim, Jinwoo Shin, Pieter Abbeel, Younggyo Seo