Variational Autoencoders
Variational autoencoders (VAEs) are probabilistic generative models that learn a compressed latent representation of data and reconstruct the original data from it. Current research focuses on improving VAE performance and robustness through techniques such as uncertainty quantification, novel loss functions (e.g., based on the information bottleneck principle), and hybrid architectures that combine VAEs with self-organizing maps, diffusion models, or generative adversarial networks to address challenges like disentanglement, anomaly detection, and continual learning. By enabling efficient data representation, generation, and analysis, this work has significant implications across diverse fields, including image processing, speech recognition, and drug discovery.
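To make the encode-sample-decode structure concrete, below is a minimal VAE sketch in PyTorch. It is illustrative only: the layer sizes, the fully connected encoder/decoder, and the Bernoulli (binary cross-entropy) reconstruction term are assumptions chosen for simplicity, not taken from any of the papers listed here.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal VAE: the encoder maps x to a Gaussian posterior q(z|x);
    the decoder reconstructs x from a latent sample z."""
    def __init__(self, x_dim=784, h_dim=256, z_dim=16):  # dims are illustrative
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)      # posterior mean
        self.logvar = nn.Linear(h_dim, z_dim)  # posterior log-variance
        self.dec = nn.Sequential(
            nn.Linear(z_dim, h_dim), nn.ReLU(), nn.Linear(h_dim, x_dim)
        )

    def forward(self, x):
        h = F.relu(self.enc(x))
        mu, logvar = self.mu(h), self.logvar(h)
        # Reparameterization trick: z = mu + sigma * eps keeps sampling differentiable.
        z = mu + torch.exp(0.5 * logvar) * torch.randn_like(mu)
        return self.dec(z), mu, logvar

def negative_elbo(x, x_logits, mu, logvar):
    # Negative ELBO = reconstruction error + KL(q(z|x) || N(0, I)).
    recon = F.binary_cross_entropy_with_logits(x_logits, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl
```

Minimizing this negative ELBO trades off reconstruction fidelity against keeping the posterior close to the prior; the loss-function variants mentioned above (e.g., information-bottleneck-style objectives) typically reweight or replace these two terms.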
Papers
Identifying latent state transition in non-linear dynamical systems
Çağlar Hızlı, Çağatay Yıldız, Matthias Bethge, ST John, Pekka Marttinen
Addressing Index Collapse of Large-Codebook Speech Tokenizer with Dual-Decoding Product-Quantized Variational Auto-Encoder
Haohan Guo, Fenglong Xie, Dongchao Yang, Hui Lu, Xixin Wu, Helen Meng