Variational Autoencoder
Variational Autoencoders (VAEs) are generative models that learn a compressed, lower-dimensional representation (latent space) of input data, enabling both reconstruction of inputs and generation of new samples. Current research focuses on improving VAE architectures, for example through beta-VAEs that better disentangle latent features, and on combining VAEs with other techniques such as large language models, vision transformers, and diffusion models to improve performance in specific applications. This versatility makes VAEs valuable across diverse fields, including image processing, anomaly detection, materials science, and even astrodynamics, where they enable efficient data analysis, feature extraction, and generation of synthetic data when real data is scarce or expensive to obtain.
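To make the encoder-decoder structure and the beta-weighted objective concrete, here is a minimal PyTorch sketch. The layer sizes, the 784-dimensional input, and the `beta` parameter are illustrative assumptions, not details taken from any of the papers listed below; setting `beta > 1` recovers the beta-VAE objective mentioned above.

```python
# Minimal VAE sketch (illustrative; dimensions and hyperparameters are assumptions).
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
        super().__init__()
        self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
        self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of q(z|x)
        self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of q(z|x)
        self.decoder = nn.Sequential(
            nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
            nn.Linear(hidden_dim, input_dim), nn.Sigmoid(),
        )

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps, so gradients flow through mu and logvar.
        std = torch.exp(0.5 * logvar)
        eps = torch.randn_like(std)
        return mu + std * eps

    def forward(self, x):
        h = self.encoder(x)
        mu, logvar = self.fc_mu(h), self.fc_logvar(h)
        z = self.reparameterize(mu, logvar)
        return self.decoder(z), mu, logvar

def vae_loss(recon_x, x, mu, logvar, beta=1.0):
    # Negative ELBO: reconstruction term plus beta-weighted KL divergence.
    # beta > 1 gives the beta-VAE objective used to encourage disentanglement.
    recon = F.binary_cross_entropy(recon_x, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + beta * kl

# Generation: decode latent vectors drawn from the standard-normal prior.
model = VAE()
with torch.no_grad():
    z = torch.randn(16, 20)
    samples = model.decoder(z)
```

Sampling from the prior, as in the last lines, is what lets a trained VAE produce new synthetic data rather than only reconstructing its inputs.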
Papers
Unsupervised Multiple Domain Translation through Controlled Disentanglement in Variational Autoencoder
Antonio Almudévar, Théo Mariotte, Alfonso Ortega, Marie Tahon
CFASL: Composite Factor-Aligned Symmetry Learning for Disentanglement in Variational AutoEncoder
Hee-Jun Jung, Jaehyoung Jeong, Kangil Kim
Generative Model-Driven Synthetic Training Image Generation: An Approach to Cognition in Rail Defect Detection
Rahatara Ferdousi, Chunsheng Yang, M. Anwar Hossain, Fedwa Laamarti, M. Shamim Hossain, Abdulmotaleb El Saddik
HQ-VAE: Hierarchical Discrete Representation Learning with Variational Bayes
Yuhta Takida, Yukara Ikemiya, Takashi Shibuya, Kazuki Shimada, Woosung Choi, Chieh-Hsin Lai, Naoki Murata, Toshimitsu Uesaka, Kengo Uchida, Wei-Hsiang Liao, Yuki Mitsufuji