Representation Learning
Representation learning aims to create meaningful and efficient data representations that capture underlying structure and facilitate downstream tasks such as classification, prediction, and control. Current research focuses on developing robust and generalizable representations, often employing techniques such as contrastive learning, transformers, and mixture-of-experts models, and addresses challenges including disentanglement, noisy or sparse data, and efficiency in multi-task and continual learning settings. These advances improve the performance and interpretability of machine learning models across diverse applications, from recommendation systems to medical image analysis and causal inference.
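To make one of the techniques mentioned above concrete, the following is a minimal sketch of a contrastive (InfoNCE-style) objective in PyTorch. The function name info_nce_loss, the temperature value, and the toy "views" are illustrative assumptions and are not drawn from any of the papers listed below.

```python
# Minimal contrastive-learning sketch (InfoNCE-style), assuming PyTorch.
# Names and hyperparameters are illustrative, not from the listed papers.
import torch
import torch.nn.functional as F


def info_nce_loss(z_a: torch.Tensor, z_b: torch.Tensor, temperature: float = 0.1) -> torch.Tensor:
    """Contrastive loss over a batch of paired embeddings.

    z_a, z_b: (batch, dim) embeddings of two views of the same samples.
    Row i of z_a and row i of z_b form the positive pair; all other rows
    in the batch act as negatives.
    """
    z_a = F.normalize(z_a, dim=1)
    z_b = F.normalize(z_b, dim=1)
    logits = z_a @ z_b.t() / temperature              # (batch, batch) cosine similarities
    targets = torch.arange(z_a.size(0), device=z_a.device)
    return F.cross_entropy(logits, targets)           # positives lie on the diagonal


if __name__ == "__main__":
    # Random embeddings stand in for an encoder's output on two augmented views.
    batch, dim = 32, 128
    view_a = torch.randn(batch, dim)
    view_b = view_a + 0.1 * torch.randn(batch, dim)   # noisy second "view"
    print(info_nce_loss(view_a, view_b).item())
```

In practice the two views would come from data augmentations passed through a shared encoder; the loss pulls matched pairs together while pushing apart other samples in the batch.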
Papers
ViTally Consistent: Scaling Biological Representation Learning for Cell Microscopy
Kian Kenyon-Dean, Zitong Jerry Wang, John Urbanik, Konstantin Donhauser, Jason Hartford, Saber Saberian, Nil Sahin, Ihab Bendidi, Safiye Celik, Marta Fay, Juan Sebastian Rodriguez Vera, Imran S Haque, Oren Kraus
Revisiting K-mer Profile for Effective and Scalable Genome Representation Learning
Abdulkadir Celikkanat, Andres R. Masegosa, Thomas D. Nielsen
Disentangling Disentangled Representations: Towards Improved Latent Units via Diffusion Models
Youngjun Jun, Jiwoo Park, Kyobin Choo, Tae Eun Choi, Seong Jae Hwang
Exploring Consistency in Graph Representations: from Graph Kernels to Graph Neural Networks
Xuyuan Liu, Yinghao Cai, Qihui Yang, Yujun Yan
Language-guided Hierarchical Fine-grained Image Forgery Detection and Localization
Xiao Guo, Xiaohong Liu, Iacopo Masi, Xiaoming Liu
Enhance Hyperbolic Representation Learning via Second-order Pooling
Kun Song, Ruben Solozabal, Li Hao, Lu Ren, Moloud Abdar, Qing Li, Fakhri Karray, Martin Takac
SimSiam Naming Game: A Unified Approach for Representation Learning and Emergent Communication
Nguyen Le Hoang, Tadahiro Taniguchi, Fang Tianwei, Akira Taniguchi