Latent Representation
Latent representation learning focuses on creating compact, informative summaries (latent representations) of complex data, capturing essential features while discarding irrelevant detail. Current research emphasizes methods for generating these representations, particularly architectures such as autoencoders, variational autoencoders (VAEs), joint-embedding predictive architectures (JEPAs), and diffusion models, often within self-supervised or semi-supervised learning frameworks. These advances improve performance on downstream tasks, including image classification, natural language processing, and medical image analysis, by enabling more efficient and robust model training and greater interpretability. The ability to learn meaningful latent representations is crucial for advancing machine learning across numerous scientific disciplines and practical applications.
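To make the core idea concrete, here is a minimal sketch of the simplest of the architectures mentioned above: a linear autoencoder that compresses 5-dimensional inputs into a 2-dimensional latent code and learns to reconstruct them. All layer sizes, the toy data, and the learning rate are illustrative choices, not taken from any of the listed papers.

```python
import numpy as np

# Minimal linear autoencoder sketch (all hyperparameters are
# arbitrary illustrative choices, not from any cited paper).
rng = np.random.default_rng(0)

# Toy data: 200 samples in 5-D that lie near a 2-D subspace,
# so a 2-D latent code can summarize them well.
basis = rng.normal(size=(2, 5))
X = rng.normal(size=(200, 2)) @ basis + 0.01 * rng.normal(size=(200, 5))

W_enc = rng.normal(scale=0.1, size=(5, 2))   # encoder weights
W_dec = rng.normal(scale=0.1, size=(2, 5))   # decoder weights
lr = 0.01

def loss(X, W_enc, W_dec):
    Z = X @ W_enc        # latent representation, shape (200, 2)
    X_hat = Z @ W_dec    # reconstruction, shape (200, 5)
    return ((X - X_hat) ** 2).mean()

initial = loss(X, W_enc, W_dec)
for _ in range(500):
    Z = X @ W_enc
    X_hat = Z @ W_dec
    err = X_hat - X                            # reconstruction error
    grad_dec = Z.T @ err / len(X)              # gradient w.r.t. decoder
    grad_enc = X.T @ (err @ W_dec.T) / len(X)  # gradient w.r.t. encoder
    W_dec -= lr * grad_dec
    W_enc -= lr * grad_enc

final = loss(X, W_enc, W_dec)
```

After training, `Z` holds the learned latent codes: each row is a compact 2-D summary of a 5-D input, recoverable up to reconstruction error. VAEs, JEPAs, and diffusion models generalize this compress-and-predict idea with probabilistic latents, predictive objectives, and iterative denoising, respectively.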
Papers
Instruction Embedding: Latent Representations of Instructions Towards Task Identification
Yiwei Li, Jiayi Shi, Shaoxiong Feng, Peiwen Yuan, Xinglin Wang, Boyuan Pan, Heda Wang, Yao Hu, Kan Li
One Node Per User: Node-Level Federated Learning for Graph Neural Networks
Zhidong Gao, Yuanxiong Guo, Yanmin Gong
Individuation in Neural Models with and without Visual Grounding
Alexey Tikhonov, Lisa Bylinina, Ivan P. Yamshchikov
Latent Representation Learning for Multimodal Brain Activity Translation
Arman Afrasiyabi, Dhananjay Bhaskar, Erica L. Busch, Laurent Caplette, Rahul Singh, Guillaume Lajoie, Nicholas B. Turk-Browne, Smita Krishnaswamy