Variational Autoencoders
Variational Autoencoders (VAEs) are generative models that learn a compressed latent representation of data: they are trained to reconstruct inputs from this representation while also modeling the underlying data distribution. Current research focuses on adapting VAE architectures to specific tasks such as image generation and anomaly detection, and on variants (conditional VAEs, hierarchical VAEs, and models incorporating vector quantization or diffusion) that improve performance and interpretability. This work matters because VAEs provide a powerful framework for unsupervised learning, with applications ranging from image processing and molecular design to anomaly detection and causal inference.
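The core VAE idea described above can be sketched in a few lines: an encoder maps an input to the parameters of a Gaussian over a latent code, a sample is drawn via the reparameterization trick, and training minimizes the negative ELBO (reconstruction error plus a KL penalty toward a standard-normal prior). The NumPy sketch below is illustrative only; the linear encoder/decoder, weight names, and toy dimensions are assumptions, not the method of any paper listed here.

```python
import numpy as np

rng = np.random.default_rng(0)

def encode(x, W_mu, W_logvar):
    # Hypothetical linear encoder: maps input x to the (mean, log-variance)
    # of a diagonal Gaussian q(z|x) over the latent code z.
    return x @ W_mu, x @ W_logvar

def reparameterize(mu, logvar, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which keeps the sample differentiable w.r.t. mu and logvar.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z, W_dec):
    # Hypothetical linear decoder mapping the latent code back to input space.
    return z @ W_dec

def negative_elbo(x, x_hat, mu, logvar):
    # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
    recon = np.sum((x - x_hat) ** 2)  # Gaussian log-likelihood up to a constant
    kl = -0.5 * np.sum(1.0 + logvar - mu**2 - np.exp(logvar))
    return recon + kl

# Toy setup: 8 samples of 4-d data compressed to a 2-d latent space.
x = rng.standard_normal((8, 4))
W_mu = rng.standard_normal((4, 2))
W_logvar = 0.1 * rng.standard_normal((4, 2))
W_dec = rng.standard_normal((2, 4))

mu, logvar = encode(x, W_mu, W_logvar)
z = reparameterize(mu, logvar, rng)
loss = negative_elbo(x, decode(z, W_dec), mu, logvar)
```

In a real VAE the encoder and decoder are neural networks and `loss` is minimized by gradient descent; both terms here are nonnegative, so the loss is bounded below by zero.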
Papers
Likelihood-Free Variational Autoencoders
Chen Xu, Qiang Wang, Lijun Sun
Beijing University of Posts and Telecommunications ● McGill University

Enhancing Variational Autoencoders with Smooth Robust Latent Encoding
Hyomin Lee, Minseon Kim, Sangwon Jang, Jongheon Jeong, Sung Ju Hwang
Korea University ● Microsoft ● KAIST ● DeepAuto.ai
Application of Deep Generative Models for Anomaly Detection in Complex Financial Transactions
Tengda Tang, Jianhua Yao, Yixian Wang, Qiuwu Sha, Hanrui Feng, Zhen Xu
University of Michigan ● Trine University ● The University of Chicago ● Columbia University ● Independent Researcher

Latent Bayesian Optimization via Autoregressive Normalizing Flows
Seunghun Lee, Jinyoung Park, Jaewon Chu, Minseo Yoon, Hyunwoo J. Kim
Korea University ● KAIST