VAE Model
Variational Autoencoders (VAEs) are probabilistic generative models that learn the underlying distribution of data by encoding it into a lower-dimensional latent space and decoding it back to the original space. Current research focuses on improving VAE performance through architectural enhancements, such as hierarchical structures and integration with other model families like diffusion models and transformers, as well as on addressing challenges like posterior collapse and efficient training on high-dimensional data. VAEs find applications across diverse fields, including recommendation systems, single-cell analysis, time series forecasting, and image generation, making them significant for both theoretical advances in machine learning and practical problem-solving.
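As a minimal sketch of the encode-sample-decode loop and the ELBO objective described above, assuming a PyTorch implementation with illustrative layer sizes (x_dim, h_dim, z_dim are hypothetical, not drawn from any of the papers below):

```python
# Minimal VAE sketch: encoder -> reparameterized sample -> decoder,
# trained by minimizing the negative ELBO (reconstruction + KL term).
# Dimensions are illustrative assumptions, not from a specific paper.
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, x_dim=784, h_dim=400, z_dim=20):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)       # mean of q(z|x)
        self.logvar = nn.Linear(h_dim, z_dim)   # log-variance of q(z|x)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable w.r.t. mu, sigma
        std = torch.exp(0.5 * logvar)
        return mu + std * torch.randn_like(std)

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

def neg_elbo(x, x_hat, mu, logvar):
    # Reconstruction term plus KL(q(z|x) || N(0, I)) in closed form.
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

model = VAE()
x = torch.rand(8, 784)  # dummy batch of inputs in [0, 1]
mu, logvar = model.encode(x)
x_hat = model.decode(model.reparameterize(mu, logvar))
loss = neg_elbo(x, x_hat, mu, logvar)
loss.backward()
```

When the KL term dominates early in training, the encoder can collapse to the prior (the posterior collapse issue mentioned above); common mitigations weight or anneal the KL term.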
Papers
CRADLE-VAE: Enhancing Single-Cell Gene Perturbation Modeling with Counterfactual Reasoning-based Artifact Disentanglement
Seungheun Baek, Soyon Park, Yan Ting Chok, Junhyun Lee, Jueon Park, Mogan Gim, Jaewoo Kang
On the Convergence Analysis of Over-Parameterized Variational Autoencoders: A Neural Tangent Kernel Perspective
Li Wang, Wei Huang