Variational Autoencoder
Variational Autoencoders (VAEs) are generative models that learn a compressed, lower-dimensional representation (latent space) of input data, allowing both reconstruction of inputs and generation of new samples. Current research focuses on improving VAE architectures, for example by using beta-VAEs for better disentanglement of latent features, and on integrating VAEs with other techniques such as large language models, vision transformers, and diffusion models to enhance performance in specific applications. This versatility makes VAEs valuable across diverse fields, including image processing, anomaly detection, materials science, and even astrodynamics, by enabling efficient data analysis, feature extraction, and generation of synthetic data where real data is scarce or expensive to obtain.
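The core mechanics described above, an encoder producing latent distribution parameters, the reparameterization trick, and an ELBO-style loss (reconstruction plus KL divergence), can be sketched minimally in NumPy. This is an illustrative forward pass only, with randomly initialized weights and made-up dimensions, not a trained model or any particular paper's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical dimensions for illustration.
x_dim, z_dim = 8, 2

# Randomly initialized linear encoder/decoder weights; a real VAE
# learns these by gradient descent on the ELBO.
W_enc_mu = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_enc_logvar = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_dec = rng.normal(scale=0.1, size=(x_dim, z_dim))

def encode(x):
    """Map input x to parameters of q(z|x) = N(mu, diag(exp(logvar)))."""
    return W_enc_mu @ x, W_enc_logvar @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, so gradients can flow through mu, logvar."""
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample back to data space (reconstruction / generation)."""
    return W_dec @ z

def elbo_loss(x):
    """Negative ELBO: reconstruction error plus KL(q(z|x) || N(0, I))."""
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.sum((x - x_hat) ** 2)
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

x = rng.standard_normal(x_dim)
loss = elbo_loss(x)

# Generation: decode a sample drawn from the prior N(0, I).
x_new = decode(rng.standard_normal(z_dim))
```

A beta-VAE, mentioned above, differs only in weighting the KL term by a factor beta > 1 (`recon + beta * kl`), which encourages more disentangled latent dimensions at some cost in reconstruction quality.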
Papers
Mixture-of-experts VAEs can disregard variation in surjective multimodal data
Jannik Wolff, Tassilo Klein, Moin Nabi, Rahul G. Krishnan, Shinichi Nakajima
Structured Graph Variational Autoencoders for Indoor Furniture Layout Generation
Aditya Chattopadhyay, Xi Zhang, David Paul Wipf, Himanshu Arora, Rene Vidal
Upmixing via style transfer: a variational autoencoder for disentangling spatial images and musical content
Haici Yang, Sanna Wager, Spencer Russell, Mike Luo, Minje Kim, Wontak Kim
Representation Uncertainty in Self-Supervised Learning as Variational Inference
Hiroki Nakamura, Masashi Okada, Tadahiro Taniguchi
Breast Cancer Induced Bone Osteolysis Prediction Using Temporal Variational Auto-Encoders
Wei Xiong, Neil Yeung, Shubo Wang, Haofu Liao, Liyun Wang, Jiebo Luo
Partitioning Image Representation in Contrastive Learning
Hyunsub Lee, Heeyoul Choi
Attri-VAE: attribute-based interpretable representations of medical images with variational autoencoders
Irem Cetin, Maialen Stephens, Oscar Camara, Miguel Angel Gonzalez Ballester