Variational Autoencoders
Variational Autoencoders (VAEs) are generative models that learn a compressed latent representation of data: an encoder maps each input to a distribution over latent variables, and a decoder reconstructs the input from samples of that distribution, so the model learns the underlying data distribution as well as a reconstruction. Current research focuses on adapting VAE architectures to specific tasks such as image generation and anomaly detection, exploring variants like conditional VAEs, hierarchical VAEs, and hybrids that incorporate vector quantization or diffusion models to improve performance and interpretability. This work is significant because VAEs offer a powerful framework for unsupervised learning, with applications ranging from image processing and molecular design to anomaly detection and causal inference.
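To make the encode/sample/decode cycle described above concrete, here is a minimal numpy sketch of a single VAE forward pass and its training objective (the negative ELBO: reconstruction error plus a KL penalty toward a standard-normal prior). The linear encoder/decoder, weight names, and dimensions are illustrative choices, not taken from any of the papers below.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Linear encoder: map inputs to the mean and log-variance of q(z|x).
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which keeps sampling differentiable with respect to mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Linear decoder: map latent samples back to data space.
    return z @ W_dec

def neg_elbo(x, x_hat, mu, logvar):
    # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    kl = -0.5 * np.mean(np.sum(1.0 + logvar - mu**2 - np.exp(logvar), axis=1))
    return recon + kl

# Toy setup: 8 samples of 4-d data compressed into a 2-d latent space.
x = rng.standard_normal((8, 4))
W_mu = rng.standard_normal((4, 2)) * 0.1
W_logvar = rng.standard_normal((4, 2)) * 0.1
W_dec = rng.standard_normal((2, 4)) * 0.1

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
loss = neg_elbo(x, decode(z, W_dec), mu, logvar)
print("negative ELBO:", loss)
```

In a real VAE the encoder and decoder are neural networks and the loss is minimized by gradient descent; the sketch only shows how the two loss terms and the reparameterized sample fit together.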
Papers
Optimizing Training Trajectories in Variational Autoencoders via Latent Bayesian Optimization Approach
Arpan Biswas, Rama Vasudevan, Maxim Ziatdinov, Sergei V. Kalinin
Anomaly Detection in Echocardiograms with Dynamic Variational Trajectory Models
Alain Ryser, Laura Manduchi, Fabian Laumer, Holger Michel, Sven Wellmann, Julia E. Vogt
Laplacian Autoencoders for Learning Stochastic Representations
Marco Miani, Frederik Warburg, Pablo Moreno-Muñoz, Nicki Skafte Detlefsen, Søren Hauberg