Latent Space
A latent space is a lower-dimensional representation of high-dimensional data that captures its essential features while reducing computational complexity and improving interpretability. Current research focuses on efficient algorithms and model architectures, such as variational autoencoders (VAEs), generative adversarial networks (GANs), and diffusion models, that learn and manipulate latent spaces for tasks ranging from anomaly detection and image generation to controlling generative models and improving the efficiency of autonomous systems. This work has significant implications across diverse fields, enabling advances in drug discovery, autonomous driving, and cybersecurity through improved data analysis, greater model efficiency, and finer control over generative processes.
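As a concrete illustration of how a latent space is learned, the sketch below implements a toy variational autoencoder that compresses 784-dimensional inputs (e.g., flattened 28x28 images) into a 2-dimensional latent code. It is a minimal example assuming PyTorch; the class name TinyVAE, the layer sizes, and the latent dimension are illustrative choices and are not drawn from any of the papers listed here.

```python
# Minimal VAE sketch: encode high-dimensional inputs into a small latent space.
# Assumes PyTorch; all dimensions and names are illustrative, not from the listed papers.
import torch
import torch.nn as nn
import torch.nn.functional as F


class TinyVAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=2):
        super().__init__()
        # Encoder: input -> parameters (mean, log-variance) of a Gaussian in latent space
        self.enc = nn.Linear(input_dim, hidden_dim)
        self.mu = nn.Linear(hidden_dim, latent_dim)
        self.logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder: latent code -> reconstruction in the original input space
        self.dec_hidden = nn.Linear(latent_dim, hidden_dim)
        self.dec_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # Sample z ~ N(mu, sigma^2) via the reparameterization trick
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        h = F.relu(self.dec_hidden(z))
        return torch.sigmoid(self.dec_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(recon_x, x, mu, logvar):
    # Reconstruction term plus KL divergence that regularizes the latent space
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl


if __name__ == "__main__":
    model = TinyVAE()
    x = torch.rand(16, 784)           # a toy batch of "images" with values in [0, 1]
    recon, mu, logvar = model(x)
    print("latent means:", mu.shape)  # (16, 2): each input compressed to 2 numbers
    print("loss:", vae_loss(recon, x, mu, logvar).item())
```

Once trained, the 2-dimensional codes produced by encode() can be plotted, interpolated, or perturbed, which is the basic mechanism behind the latent-space manipulation and augmentation techniques surveyed above.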
Papers
GiGaMAE: Generalizable Graph Masked Autoencoder via Collaborative Latent Space Reconstruction
Yucheng Shi, Yushun Dong, Qiaoyu Tan, Jundong Li, Ninghao Liu
From Hope to Safety: Unlearning Biases of Deep Models via Gradient Penalization in Latent Space
Maximilian Dreyer, Frederik Pahde, Christopher J. Anders, Wojciech Samek, Sebastian Lapuschkin
Investigating and Improving Latent Density Segmentation Models for Aleatoric Uncertainty Quantification in Medical Imaging
M. M. Amaan Valiuddin, Christiaan G. A. Viviers, Ruud J. G. van Sloun, Peter H. N. de With, Fons van der Sommen
Statistically Optimal Generative Modeling with Maximum Deviation from the Empirical Distribution
Elen Vardanyan, Sona Hunanyan, Tigran Galstyan, Arshak Minasyan, Arnak Dalalyan
A New Deep State-Space Analysis Framework for Patient Latent State Estimation and Classification from EHR Time Series Data
Aya Nakamura, Ryosuke Kojima, Yuji Okamoto, Eiichiro Uchino, Yohei Mineharu, Yohei Harada, Mayumi Kamada, Manabu Muto, Motoko Yanagita, Yasushi Okuno
LatentAugment: Data Augmentation via Guided Manipulation of GAN's Latent Space
Lorenzo Tronchin, Minh H. Vu, Paolo Soda, Tommy Löfstedt