Representation Learning
Representation learning aims to create meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust and generalizable representations, often built with contrastive learning, transformers, or mixture-of-experts models; open challenges include disentanglement, handling noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
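To make the contrastive-learning technique mentioned above concrete, here is a minimal sketch of an InfoNCE-style loss with in-batch negatives: embeddings of two "views" of the same sample are pulled together while all other pairings in the batch are pushed apart. The function name, temperature value, and toy data are illustrative assumptions, not drawn from any paper listed below.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z_a, z_b, temperature=0.1):
    """InfoNCE contrastive loss: row i of z_a should match row i of z_b
    (the positive pair) and repel every other row (in-batch negatives)."""
    z_a = F.normalize(z_a, dim=1)           # unit-norm embeddings
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature    # (N, N) cosine similarities
    targets = torch.arange(z_a.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: two slightly perturbed "views" of the same 8 samples.
a = torch.randn(8, 32)
b = a + 0.05 * torch.randn(8, 32)
print(info_nce_loss(a, b).item())
```

In-batch negatives keep the sketch short; practical systems pair this loss with a learned encoder and rely on large batches or memory banks to supply harder negatives.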
Papers
A Survey on Temporal Knowledge Graph: Representation Learning and Applications
Li Cai, Xin Mao, Yuhao Zhou, Zhaoguang Long, Changxu Wu, Man Lan
Run-time Introspection of 2D Object Detection in Automated Driving Systems Using Learning Representations
Hakan Yekta Yatbaz, Mehrdad Dianati, Konstantinos Koufos, Roger Woodman
Autoencoder-based General Purpose Representation Learning for Customer Embedding
Jan Henrik Bertrand, Jacopo Pio Gargano, Laurent Mombaerts, Jonathan Taws
DecisionNCE: Embodied Multimodal Representations via Implicit Preference Learning
Jianxiong Li, Jinliang Zheng, Yinan Zheng, Liyuan Mao, Xiao Hu, Sijie Cheng, Haoyi Niu, Jihao Liu, Yu Liu, Jingjing Liu, Ya-Qin Zhang, Xianyuan Zhan
LoRA+: Efficient Low Rank Adaptation of Large Models
Soufiane Hayou, Nikhil Ghosh, Bin Yu
KARL: Knowledge-Aware Retrieval and Representations aid Retention and Learning in Students
Matthew Shu, Nishant Balepur, Shi Feng, Jordan Boyd-Graber
Separating common from salient patterns with Contrastive Representation Learning
Robin Louiset, Edouard Duchesnay, Antoine Grigis, Pietro Gori