Latent Space
A latent space is a lower-dimensional representation of high-dimensional data that aims to capture essential features while reducing computational cost and improving interpretability. Current research focuses on efficient algorithms and model architectures, such as variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models, that learn and manipulate these latent spaces for tasks ranging from anomaly detection and image generation to controlling generative models and improving the efficiency of autonomous systems. This work has significant implications across diverse fields, enabling advances in drug discovery, autonomous driving, and cybersecurity through improved data analysis, greater model efficiency, and finer control over generative processes.
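To make the idea concrete, the sketch below shows a minimal variational autoencoder in PyTorch (an illustrative assumption; the papers listed here use a variety of frameworks and architectures). The encoder compresses each high-dimensional input into a low-dimensional latent vector z, and the decoder reconstructs the input from z; all layer sizes and dimensions are arbitrary choices for demonstration, not taken from any of the works below.

```python
import torch
import torch.nn as nn

class VAE(nn.Module):
    """Minimal VAE: maps 784-dim inputs to a 2-dim latent space and back."""
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=2):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # Sample z = mu + sigma * eps so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)                 # point in latent space
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus KL divergence to the standard normal prior.
    recon_loss = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_loss + kl

# Toy usage: encode a batch of random "images" flattened to 784 dimensions.
x = torch.rand(16, 784)
model = VAE()
recon, mu, logvar = model(x)
print(vae_loss(recon, x, mu, logvar).item())
```

Because the latent space here is only two-dimensional, the learned mu vectors can be plotted directly, which is one common way such representations are inspected and interpreted.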
Papers
Interpretable Representation Learning of Cardiac MRI via Attribute Regularization
Maxime Di Folco, Cosmin I. Bercea, Emily Chan, Julia A. Schnabel
LAFMA: A Latent Flow Matching Model for Text-to-Audio Generation
Wenhao Guan, Kaidi Wang, Wangjin Zhou, Yang Wang, Feng Deng, Hui Wang, Lin Li, Qingyang Hong, Yong Qin
Nomic Embed Vision: Expanding the Latent Space
Zach Nussbaum, Brandon Duderstadt, Andriy Mulyar
Latent Neural Operator for Solving Forward and Inverse PDE Problems
Tian Wang, Chuang Wang
Spherinator and HiPSter: Representation Learning for Unbiased Knowledge Discovery from Simulations
Kai L. Polsterer, Bernd Doser, Andreas Fehlner, Sebastian Trujillo-Gomez