Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust and generalizable representations, drawing on techniques such as contrastive learning, transformers, and mixture-of-experts models, while tackling challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
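As a concrete illustration of one technique named above, the sketch below shows a minimal contrastive objective (an NT-Xent/InfoNCE-style loss of the kind popularized by SimCLR) in PyTorch. This is a generic example, not the method of any paper listed here; the function name nt_xent_loss, the temperature value, and the random toy embeddings are illustrative assumptions.

```python
# Minimal sketch of a contrastive (NT-Xent) loss, assuming PyTorch and a
# batch of paired "views" produced by data augmentation. Illustrative only;
# not taken from any specific paper in this list.
import torch
import torch.nn.functional as F

def nt_xent_loss(z1: torch.Tensor, z2: torch.Tensor,
                 temperature: float = 0.5) -> torch.Tensor:
    """NT-Xent loss over two augmented views z1, z2, each of shape (N, D)."""
    n = z1.size(0)
    # Stack both views and L2-normalize so dot products are cosine similarities.
    z = F.normalize(torch.cat([z1, z2], dim=0), dim=1)   # (2N, D)
    sim = z @ z.t() / temperature                        # (2N, 2N) similarity logits
    sim.fill_diagonal_(float("-inf"))                    # exclude self-similarity
    # The positive for row i is its counterpart in the other view (i +/- N).
    targets = (torch.arange(2 * n, device=z.device) + n) % (2 * n)
    return F.cross_entropy(sim, targets)

# Toy usage: random embeddings standing in for encoder outputs.
z1, z2 = torch.randn(8, 128), torch.randn(8, 128)
print(nt_xent_loss(z1, z2).item())
```

Each embedding's positive is its augmented counterpart, while the other 2N-2 embeddings in the batch serve as negatives, which is why larger batches tend to help this style of contrastive pretraining.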
Papers
Diffusion Models and Representation Learning: A Survey
Michael Fuest, Pingchuan Ma, Ming Gui, Johannes S. Fischer, Vincent Tao Hu, Björn Ommer
PolygonGNN: Representation Learning for Polygonal Geometries with Heterogeneous Visibility Graph
Dazhou Yu, Yuntong Hu, Yun Li, Liang Zhao
Efficient Personalized Text-to-image Generation by Leveraging Textual Subspace
Shian Du, Xiaotian Cheng, Qi Qian, Henglu Wei, Yi Xu, Xiangyang Ji
Identifiable Exchangeable Mechanisms for Causal Structure and Representation Learning
Patrik Reizinger, Siyuan Guo, Ferenc Huszár, Bernhard Schölkopf, Wieland Brendel
Similarity-aware Syncretic Latent Diffusion Model for Medical Image Translation with Representation Learning
Tingyi Lin, Pengju Lyu, Jie Zhang, Yuqing Wang, Cheng Wang, Jianjun Zhu
Transferable Tactile Transformers for Representation Learning Across Diverse Sensors and Tasks
Jialiang Zhao, Yuxiang Ma, Lirui Wang, Edward H. Adelson
Effective Edge-wise Representation Learning in Edge-Attributed Bipartite Graphs
Hewen Wang, Renchi Yang, Xiaokui Xiao
Towards Trustworthy Unsupervised Domain Adaptation: A Representation Learning Perspective for Enhancing Robustness, Discrimination, and Generalization
Jia-Li Yin, Haoyuan Zheng, Ximeng Liu
Semantic Graph Consistency: Going Beyond Patches for Regularizing Self-Supervised Vision Transformers
Chaitanya Devaguptapu, Sumukh Aithal, Shrinivas Ramasubramanian, Moyuru Yamada, Manohar Kaul
VIRL: Volume-Informed Representation Learning towards Few-shot Manufacturability Estimation
Yu-hsuan Chen, Jonathan Cagan, Levent Burak Kara
UniGLM: Training One Unified Language Model for Text-Attributed Graph Embedding
Yi Fang, Dongzhe Fan, Sirui Ding, Ninghao Liu, Qiaoyu Tan
Learning sum of diverse features: computational hardness and efficient gradient-based training for ridge combinations
Kazusato Oko, Yujin Song, Taiji Suzuki, Denny Wu
Enhancing Generalizability of Representation Learning for Data-Efficient 3D Scene Understanding
Yunsong Wang, Na Zhao, Gim Hee Lee
An Interpretable Alternative to Neural Representation Learning for Rating Prediction -- Transparent Latent Class Modeling of User Reviews
Giuseppe Serra, Peter Tino, Zhao Xu, Xin Yao