Variational Autoencoders
Variational Autoencoders (VAEs) are generative models that learn a compressed latent representation of data: an encoder maps each input to a distribution over latent variables, and a decoder reconstructs the input from samples of that distribution, so the model captures the underlying data distribution as well as a useful code. Current research focuses on adapting VAE architectures to specific tasks such as image generation and anomaly detection, exploring variants like conditional VAEs, hierarchical VAEs, and hybrids that incorporate vector quantization or diffusion models to improve performance and interpretability. This work is significant because VAEs offer a principled framework for unsupervised learning, enabling applications in fields ranging from image processing and molecular design to anomaly detection and causal inference.
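The encode-sample-reconstruct loop described above can be sketched numerically. The following is a minimal, illustrative example (not from any of the papers below): it uses toy linear maps as encoder and decoder, the reparameterization trick, and a one-sample Monte Carlo estimate of the evidence lower bound (ELBO) for a Gaussian VAE with a standard-normal prior. The dimensions and weight matrices are hypothetical placeholders; a real VAE would learn them by gradient ascent on the ELBO.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy dimensions, chosen only for illustration.
x_dim, z_dim = 8, 2

# Linear "encoder" producing the mean and log-variance of q(z|x),
# and a linear "decoder" producing the mean of p(x|z).
# In practice these would be neural networks with learned weights.
W_mu = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_logvar = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_dec = rng.normal(scale=0.1, size=(x_dim, z_dim))

def elbo(x):
    """One-sample Monte Carlo estimate of the ELBO for a Gaussian VAE."""
    # Encode: parameters of the approximate posterior q(z|x).
    mu = W_mu @ x
    logvar = W_logvar @ x
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which keeps the sample differentiable w.r.t. mu and logvar.
    eps = rng.standard_normal(z_dim)
    z = mu + np.exp(0.5 * logvar) * eps
    # Decode: reconstruction mean.
    x_hat = W_dec @ z
    # Gaussian reconstruction log-likelihood with unit variance (up to a constant).
    rec = -0.5 * np.sum((x - x_hat) ** 2)
    # Closed-form KL( q(z|x) || N(0, I) ) for diagonal Gaussians.
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return rec - kl

x = rng.standard_normal(x_dim)
print(elbo(x))
```

Training maximizes this quantity, trading reconstruction accuracy against how far the approximate posterior drifts from the prior; many of the variants surveyed above (hierarchical, vector-quantized, diffusion-based) change exactly how the latent distribution and this trade-off are modeled.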
Papers
Cooperation in the Latent Space: The Benefits of Adding Mixture Components in Variational Autoencoders
Oskar Kviman, Ricky Molén, Alexandra Hotti, Semih Kurt, Víctor Elvira, Jens Lagergren
Disentanglement with Biological Constraints: A Theory of Functional Cell Types
James C. R. Whittington, Will Dorrell, Surya Ganguli, Timothy E. J. Behrens
Leveraging variational autoencoders for multiple data imputation
Breeshey Roskams-Hieter, Jude Wells, Sara Wade
FONDUE: an algorithm to find the optimal dimensionality of the latent representations of variational autoencoders
Lisa Bonheme, Marek Grzes
Learning to Drop Out: An Adversarial Approach to Training Sequence VAEs
Đorđe Miladinović, Kumar Shridhar, Kushal Jain, Max B. Paulus, Joachim M. Buhmann, Mrinmaya Sachan, Carl Allen