Latent Space
A latent space is a lower-dimensional representation of high-dimensional data that aims to capture its essential features while reducing computational cost and improving interpretability. Current research focuses on efficient algorithms and architectures for learning and manipulating latent spaces, notably variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models, with applications ranging from anomaly detection and image generation to controllable generation and more efficient autonomous systems. This work has broad implications: improved data analysis, model efficiency, and control over generative processes are enabling advances in fields such as drug discovery, autonomous driving, and cybersecurity.
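As a concrete illustration of how such a latent space is learned, below is a minimal VAE sketch, assuming PyTorch; the layer widths, the 784-dimensional input, and the 2-dimensional latent code are arbitrary choices for illustration, not a prescribed architecture. The encoder compresses each input to the parameters of a Gaussian over latent codes, and the decoder reconstructs the input from a sample of that Gaussian.

```python
# Minimal VAE sketch (PyTorch assumed; all sizes are illustrative).
import torch
import torch.nn as nn

class VAE(nn.Module):
    def __init__(self, input_dim=784, latent_dim=2):
        super().__init__()
        # Encoder compresses the input to the parameters of a Gaussian q(z|x).
        self.encoder = nn.Sequential(nn.Linear(input_dim, 256), nn.ReLU())
        self.to_mu = nn.Linear(256, latent_dim)      # mean of q(z|x)
        self.to_logvar = nn.Linear(256, latent_dim)  # log-variance of q(z|x)
        # Decoder reconstructs the input from a latent sample.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, 256), nn.ReLU(),
            nn.Linear(256, input_dim), nn.Sigmoid(),
        )

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.to_mu(h), self.to_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
        # which keeps the sampling step differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(recon, x, mu, logvar):
    # Reconstruction term plus the KL divergence of q(z|x) from the N(0, I) prior.
    recon_err = nn.functional.binary_cross_entropy(recon, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon_err + kl

# A 784-dimensional input (e.g. a flattened 28x28 image) is mapped to a
# 2-dimensional latent code that can be inspected, clustered, or interpolated.
model = VAE()
x = torch.rand(16, 784)
recon, mu, logvar = model(x)
loss = vae_loss(recon, x, mu, logvar)
```

After training, the low-dimensional codes produced by the encoder are the latent space that the papers below analyze, constrain, or exploit for downstream tasks.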
Papers
Leveraging Scene Embeddings for Gradient-Based Motion Planning in Latent Space
Jun Yamada, Chia-Man Hung, Jack Collins, Ioannis Havoutis, Ingmar Posner
The Wasserstein Believer: Learning Belief Updates for Partially Observable Environments through Reliable Latent Space Models
Raphael Avalos, Florent Delgrange, Ann Nowé, Guillermo A. Pérez, Diederik M. Roijers
DeCap: Decoding CLIP Latents for Zero-Shot Captioning via Text-Only Training
Wei Li, Linchao Zhu, Longyin Wen, Yi Yang
Learning disentangled representations for explainable chest X-ray classification using Dirichlet VAEs
Rachael Harkness, Alejandro F Frangi, Kieran Zucker, Nishant Ravikumar
Probabilistic Contrastive Learning Recovers the Correct Aleatoric Uncertainty of Ambiguous Inputs
Michael Kirchhof, Enkelejda Kasneci, Seong Joon Oh
Linking data separation, visual separation, and classifier performance using pseudo-labeling by contrastive learning
Bárbara Caroline Benato, Alexandre Xavier Falcão, Alexandru-Cristian Telea