Variational Autoencoder
Variational Autoencoders (VAEs) are generative models that learn a compressed, lower-dimensional representation (latent space) of input data, allowing both reconstruction of the data and generation of new samples. Current research focuses on improving VAE architectures, for example through beta-VAE objectives that encourage better disentanglement of latent features, and on integrating VAEs with other techniques such as large language models, vision transformers, and diffusion models to improve performance in specific applications. This versatility makes VAEs valuable across diverse fields, including image processing, anomaly detection, materials science, and even astrodynamics, by enabling efficient data analysis, feature extraction, and generation of synthetic data where real data is scarce or expensive to obtain.
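To make the encode-sample-decode structure and the reconstruction-plus-KL objective concrete, the following PyTorch sketch shows a minimal fully connected VAE. It is an illustrative assumption, not taken from any of the papers listed below: the layer sizes, the beta weight (beta > 1 corresponds to a beta-VAE), and the random input batch are placeholders chosen only for demonstration.

import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=256, latent_dim=16):
        super().__init__()
        # Encoder maps the input to the mean and log-variance of q(z|x).
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)
        # Decoder reconstructs the input from a latent sample z.
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid())

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.decoder(z), mu, logvar

def vae_loss(x, x_hat, mu, logvar, beta=1.0):
    # Reconstruction term plus beta-weighted KL divergence to the unit Gaussian prior.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# One training step on a random placeholder batch (assumed data, for illustration only).
model = VAE()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)
x_hat, mu, logvar = model(x)
loss = vae_loss(x, x_hat, mu, logvar, beta=4.0)
loss.backward()
optimizer.step()

After training, new samples can be generated by drawing z from a standard normal distribution and passing it through the decoder alone, which is the generative use of the model referred to above.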
Papers
$\beta$-Variational autoencoders and transformers for reduced-order modelling of fluid flows
Alberto Solera-Rico, Carlos Sanmiguel Vila, M. A. Gómez, Yuning Wang, Abdulrahman Almashjary, Scott T. M. Dawson, Ricardo Vinuesa
Toward Unsupervised 3D Point Cloud Anomaly Detection using Variational Autoencoder
Mana Masuda, Ryo Hachiuma, Ryo Fujii, Hideo Saito, Yusuke Sekikawa
Joint optimization of a $\beta$-VAE for ECG task-specific feature extraction
Viktor van der Valk, Douwe Atsma, Roderick Scherptong, Marius Staring
The Wyner Variational Autoencoder for Unsupervised Multi-Layer Wireless Fingerprinting
Teng-Hui Huang, Thilini Dahanayaka, Kanchana Thilakarathna, Philip H. W. Leong, Hesham El Gamal
Variational autoencoder with decremental information bottleneck for disentanglement
Jiantao Wu, Shentong Mo, Xiang Yang, Muhammad Awais, Sara Atito, Xingshen Zhang, Lin Wang
Encoding Binary Concepts in the Latent Space of Generative Models for Enhancing Data Representation
Zizhao Hu, Mohammad Rostami