Variational Autoencoder
Variational Autoencoders (VAEs) are generative models that learn a compressed, lower-dimensional representation (latent space) of input data, enabling both reconstruction of inputs and generation of new samples. Current research focuses on improving VAE architectures, for example by using beta-VAEs for better disentanglement of latent features, and on integrating VAEs with other techniques such as large language models, vision transformers, and diffusion models to improve performance in specific applications. This versatility makes VAEs valuable across diverse fields, including image processing, anomaly detection, materials science, and even astrodynamics, where they enable efficient data analysis, feature extraction, and generation of synthetic data in settings where real data is scarce or expensive to obtain.
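To make the core idea concrete, the following is a minimal sketch of a single VAE forward pass in NumPy. All layer sizes and weights are illustrative assumptions (random, untrained); it is not any particular paper's model, but it shows the three ingredients named above: an encoder producing a latent distribution, the reparameterization trick for sampling, and a loss combining reconstruction error with a KL regularizer.

```python
import numpy as np

# Illustrative, untrained VAE forward pass; all sizes are assumptions.
rng = np.random.default_rng(0)
x_dim, h_dim, z_dim = 8, 16, 2
x = rng.normal(size=(4, x_dim))  # a batch of 4 toy inputs

# Encoder: map x to the parameters (mu, log_var) of q(z|x)
W_enc = rng.normal(scale=0.1, size=(x_dim, h_dim))
W_mu = rng.normal(scale=0.1, size=(h_dim, z_dim))
W_lv = rng.normal(scale=0.1, size=(h_dim, z_dim))
h = np.tanh(x @ W_enc)
mu, log_var = h @ W_mu, h @ W_lv

# Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable
eps = rng.normal(size=mu.shape)
z = mu + np.exp(0.5 * log_var) * eps

# Decoder: map the latent sample z back to a reconstruction of x
W_dec = rng.normal(scale=0.1, size=(z_dim, h_dim))
W_out = rng.normal(scale=0.1, size=(h_dim, x_dim))
x_hat = np.tanh(z @ W_dec) @ W_out

# Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I))
recon = np.mean(np.sum((x - x_hat) ** 2, axis=1))
kl = np.mean(-0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1))
loss = recon + kl
```

Generation then amounts to sampling z from the standard normal prior and running only the decoder; a beta-VAE simply reweights the KL term (loss = recon + beta * kl) to trade reconstruction quality for latent disentanglement.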
Papers
$t^3$-Variational Autoencoder: Learning Heavy-tailed Data with Student's t and Power Divergence
Juno Kim, Jaehyuk Kwon, Mincheol Cho, Hyunjong Lee, Joong-Ho Won
Improving Normative Modeling for Multi-modal Neuroimaging Data using mixture-of-product-of-experts variational autoencoders
Sayantan Kumar, Philip Payne, Aristeidis Sotiras
SpACNN-LDVAE: Spatial Attention Convolutional Latent Dirichlet Variational Autoencoder for Hyperspectral Pixel Unmixing
Soham Chitnis, Kiran Mantripragada, Faisal Z. Qureshi
Utilizing VQ-VAE for End-to-End Health Indicator Generation in Predicting Rolling Bearing RUL
Junliang Wang, Qinghua Zhang, Guanhua Zhu, Guoxi Sun