Variational Autoencoder
Variational Autoencoders (VAEs) are generative models that learn a compressed, lower-dimensional representation (latent space) of input data, enabling both reconstruction of inputs and generation of new samples. Current research focuses on improving VAE architectures, for example through beta-VAE variants that encourage better disentanglement of latent features, and on combining VAEs with other techniques such as large language models, vision transformers, and diffusion models to improve performance in specific applications. This versatility makes VAEs valuable across diverse fields, including image processing, anomaly detection, materials science, and even astrodynamics, by enabling efficient data analysis, feature extraction, and generation of synthetic data where real data is scarce or expensive to obtain.
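To make the encode-sample-decode pipeline and the beta-weighted objective concrete, the sketch below shows a minimal fully connected VAE. It assumes PyTorch, and the 784-dimensional input, layer sizes, and the `beta` weight on the KL term (as in beta-VAEs) are illustrative choices; none of the papers listed here prescribe this particular implementation.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully connected VAE: encode to (mu, logvar), reparameterize, decode."""
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        # Encoder maps the input to the parameters of a diagonal Gaussian in latent space.
        self.fc_enc = nn.Linear(input_dim, hidden_dim)
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder maps a latent sample back to the input space.
        self.fc_dec = nn.Linear(latent_dim, hidden_dim)
        self.fc_out = nn.Linear(hidden_dim, input_dim)

    def encode(self, x):
        h = F.relu(self.fc_enc(x))
        return self.fc_mu(h), self.fc_logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients can flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + eps * std

    def decode(self, z):
        h = F.relu(self.fc_dec(z))
        # Sigmoid output assumes inputs normalized to [0, 1].
        return torch.sigmoid(self.fc_out(h))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar


def vae_loss(recon_x, x, mu, logvar, beta=1.0):
    # Reconstruction term plus a beta-weighted KL divergence to the standard normal prior;
    # beta > 1 gives the beta-VAE objective mentioned above.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl
```

New samples can then be generated by drawing `z = torch.randn(n, latent_dim)` from the standard normal prior and passing it through `model.decode(z)`; raising `beta` above 1 trades reconstruction quality for a more disentangled latent space.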
Papers
Variational Autoencoder with Disentanglement Priors for Low-Resource Task-Specific Natural Language Generation
Zhuang Li, Lizhen Qu, Qiongkai Xu, Tongtong Wu, Tianyang Zhan, Gholamreza Haffari
Overlooked Implications of the Reconstruction Loss for VAE Disentanglement
Nathan Michlo, Richard Klein, Steven James
Variational Autoencoder based Metamodeling for Multi-Objective Topology Optimization of Electrical Machines
Vivek Parekh, Dominik Flore, Sebastian Schöps
SegTransVAE: Hybrid CNN -- Transformer with Regularization for medical image segmentation
Quan-Dung Pham, Hai Nguyen-Truong, Nam Nguyen Phuong, Khoa N. A. Nguyen