Variational Autoencoders
Variational Autoencoders (VAEs) are generative models that learn a compressed latent representation of data: they are trained to reconstruct the input from this representation while also modeling the underlying data distribution. Current research focuses on adapting VAE architectures to specific tasks such as image generation and anomaly detection, and on variants including conditional VAEs, hierarchical VAEs, and hybrids that incorporate vector quantization or diffusion models to improve performance and interpretability. This work is significant because VAEs offer a principled framework for unsupervised learning, with applications ranging from image processing and molecular design to anomaly detection and causal inference.
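The mechanics summarized above (an encoder mapping data to a latent distribution, sampling via the reparameterization trick, and a loss combining reconstruction error with a KL penalty) can be illustrated with a minimal NumPy sketch. This is not any particular paper's method: the linear "networks" are random placeholder weights standing in for trained encoders/decoders, and the dimensions are arbitrary illustrative choices.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative dimensions (hypothetical, not from any specific model)
x_dim, z_dim = 4, 2

# Random linear maps stand in for trained encoder/decoder networks.
W_mu = rng.normal(size=(x_dim, z_dim))      # encoder: mean of q(z|x)
W_logvar = rng.normal(size=(x_dim, z_dim))  # encoder: log-variance of q(z|x)
W_dec = rng.normal(size=(z_dim, x_dim))     # decoder: reconstruction

def encode(x):
    # The encoder outputs the parameters of a diagonal Gaussian over z.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar):
    # z = mu + sigma * eps with eps ~ N(0, I), so sampling stays
    # differentiable with respect to the encoder parameters.
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    return z @ W_dec

def vae_loss(x):
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    # Reconstruction term: squared error between input and reconstruction.
    recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
    # KL(q(z|x) || N(0, I)) in closed form for diagonal Gaussians.
    kl = np.mean(-0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=1))
    return recon + kl

x = rng.normal(size=(8, x_dim))  # a toy batch of 8 samples
loss = vae_loss(x)
print(f"negative ELBO (toy batch): {loss:.3f}")
```

Training a real VAE would minimize this negative ELBO by gradient descent on the encoder and decoder weights; the closed-form KL term is what regularizes the latent space toward a standard Gaussian prior so that new samples can be generated by decoding z ~ N(0, I).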
Papers
Combining propensity score methods with variational autoencoders for generating synthetic data in presence of latent sub-groups
Kiana Farhadyar, Federico Bonofiglio, Maren Hackenberg, Daniela Zoeller, Harald Binder
Predictive variational autoencoder for learning robust representations of time-series data
Julia Huiming Wang, Dexter Tsin, Tatiana Engel
End-to-end autoencoding architecture for the simultaneous generation of medical images and corresponding segmentation masks
Aghiles Kebaili, Jérôme Lapuyade-Lahorgue, Pierre Vera, Su Ruan
Advancements in Generative AI: A Comprehensive Review of GANs, GPT, Autoencoders, Diffusion Model, and Transformers
Staphord Bengesi, Hoda El-Sayed, Md Kamruzzaman Sarker, Yao Houkpati, John Irungu, Timothy Oladunni