Variational Autoencoders
Variational Autoencoders (VAEs) are generative models that learn a compressed latent representation of data: an encoder maps each input to a distribution over latent variables, and a decoder reconstructs the input from samples of that distribution, so the model captures the underlying data distribution while remaining able to generate new samples. Current research focuses on adapting VAE architectures to specific tasks such as image generation and anomaly detection, exploring variants like conditional VAEs, hierarchical VAEs, and hybrids that incorporate vector quantization or diffusion models to improve sample quality and interpretability. This work matters because VAEs offer a principled framework for unsupervised learning, with applications ranging from image processing and molecular design to anomaly detection and causal inference.
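The encode-sample-decode loop described above can be sketched in a few lines. This is a minimal illustration, not any paper's implementation: the toy dimensions, the randomly initialised linear "encoder"/"decoder" weights, and the function names are all assumptions standing in for trained neural networks, but the reparameterization trick and the ELBO (reconstruction term minus KL divergence to a standard-normal prior) are the standard VAE machinery.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy sizes (assumed for illustration): 8-D data, 2-D latent space.
x_dim, z_dim = 8, 2

# Random linear maps stand in for trained encoder/decoder networks;
# a real VAE would learn these weights by gradient descent on the ELBO.
W_mu = rng.normal(size=(z_dim, x_dim))
W_logvar = rng.normal(size=(z_dim, x_dim))
W_dec = rng.normal(size=(x_dim, z_dim))

def encode(x):
    """Map an input to the mean and log-variance of q(z|x)."""
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    """Sample z = mu + sigma * eps, keeping the sample differentiable in mu and sigma."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    """Map a latent sample back to data space (the reconstruction)."""
    return W_dec @ z

def elbo(x):
    """Evidence lower bound: reconstruction term minus KL(q(z|x) || N(0, I))."""
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    recon = decode(z)
    recon_term = -np.sum((x - recon) ** 2)  # Gaussian log-likelihood up to a constant
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon_term - kl

x = rng.normal(size=x_dim)
print(elbo(x))  # a finite scalar; training would push this upward
```

Maximising this bound jointly trains the encoder and decoder: the reconstruction term rewards faithful decoding, while the KL term keeps the latent distribution close to the prior so that sampling z ~ N(0, I) and decoding yields new data.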
Papers
Dynamic User Interface Generation for Enhanced Human-Computer Interaction Using Variational Autoencoders
Runsheng Zhang (1), Shixiao Wang (2), Tianfang Xie (3), Shiyu Duan (4), Mengmeng Chen (5) ((1) University of Southern California, (2) School of Visual Arts, (3) Georgia Institute of Technology, (4) Carnegie Mellon University, (5) New York University)
Enhancing Diffusion Models for High-Quality Image Generation
Jaineet Shah, Michael Gromis, Rickston Pinto
STORM: A Spatio-Temporal Factor Model Based on Dual Vector Quantized Variational Autoencoders for Financial Trading
Yilei Zhao, Wentao Zhang, Tingran Yang, Yong Jiang, Fei Huang, Wei Yang Bryan Lim
Dimensionality Reduction Techniques for Global Bayesian Optimisation
Luo Long, Coralia Cartis, Paz Fink Shustin