Representation Learning
Representation learning aims to produce meaningful, efficient data representations that capture underlying structure and support downstream tasks such as classification, prediction, and control. Current research focuses on robust, generalizable representations, often using contrastive learning, transformers, and mixture-of-experts models, and tackles challenges such as disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning. These advances improve the performance and interpretability of machine learning models across applications ranging from recommendation systems to medical image analysis and causal inference.
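As a concrete illustration of the contrastive learning objective mentioned above, the sketch below shows a minimal InfoNCE-style loss in PyTorch: matching rows of two embedding batches are treated as positive pairs and all other rows as negatives. The function name, temperature value, and tensor shapes are illustrative assumptions, not taken from any of the papers listed here.

```python
import torch
import torch.nn.functional as F

def info_nce_loss(z1, z2, temperature=0.1):
    """Contrastive (InfoNCE-style) loss over paired embeddings.

    z1, z2: (batch, dim) tensors holding two views of the same items;
    row i of z1 and row i of z2 form a positive pair, all other rows
    in the batch act as negatives.
    """
    z1 = F.normalize(z1, dim=1)
    z2 = F.normalize(z2, dim=1)
    logits = z1 @ z2.T / temperature       # pairwise cosine similarities
    targets = torch.arange(z1.size(0))     # positives lie on the diagonal
    return F.cross_entropy(logits, targets)

# Toy usage: random vectors standing in for an encoder's outputs.
z1, z2 = torch.randn(8, 32), torch.randn(8, 32)
print(info_nce_loss(z1, z2).item())
```

In practice z1 and z2 would come from an encoder applied to two augmented views of the same input, so minimizing this loss pulls representations of the same item together while pushing apart those of different items.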
Papers
Downlink CCM Estimation via Representation Learning with Graph Regularization
Melih Can Zerin, Elif Vural, Ali Özgür Yılmaz
Graph-based Unsupervised Disentangled Representation Learning via Multimodal Large Language Models
Baao Xie, Qiuyu Chen, Yunnan Wang, Zequn Zhang, Xin Jin, Wenjun Zeng
Contrastive Learning of Asset Embeddings from Financial Time Series
Rian Dolphin, Barry Smyth, Ruihai Dong
Disentangled Representation Learning with the Gromov-Monge Gap
Théo Uscidda, Luca Eyring, Karsten Roth, Fabian Theis, Zeynep Akata, Marco Cuturi
Deep-Graph-Sprints: Accelerated Representation Learning in Continuous-Time Dynamic Graphs
Ahmad Naser Eddin, Jacopo Bono, David Aparício, Hugo Ferreira, Pedro Ribeiro, Pedro Bizarro
A Coding-Theoretic Analysis of Hyperspherical Prototypical Learning Geometry
Martin Lindström, Borja Rodríguez-Gálvez, Ragnar Thobaben, Mikael Skoglund
Unity in Diversity: Multi-expert Knowledge Confrontation and Collaboration for Generalizable Vehicle Re-identification
Zhenyu Kuang, Hongyang Zhang, Lidong Cheng, Yinhao Liu, Yue Huang, Xinghao Ding