Variational Autoencoder
Variational Autoencoders (VAEs) are generative models that learn a compressed, lower-dimensional representation (latent space) of input data, enabling both reconstruction of inputs and generation of new samples. Current research focuses on improving VAE architectures, for example via β-VAE variants that encourage better disentanglement of latent features, and on combining VAEs with other techniques such as large language models, vision transformers, and diffusion models to boost performance in specific applications. This versatility makes VAEs valuable across diverse fields, including image processing, anomaly detection, materials science, and even astrodynamics, by enabling efficient data analysis, feature extraction, and generation of synthetic data where real data is scarce or expensive to obtain.
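To make the summary above concrete, the following is a minimal, illustrative sketch of the VAE objective: an encoder produces the mean and log-variance of the approximate posterior, the reparameterization trick draws a latent sample, and the loss sums a reconstruction term with a KL term (whose weight β recovers the β-VAE variant mentioned above). All dimensions, the linear encoder/decoder, and the function names are hypothetical choices for illustration, not any specific paper's architecture.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical toy sizes for illustration only
x_dim, z_dim = 8, 2

# Linear "encoder": maps x to the mean and log-variance of q(z|x)
W_mu = rng.normal(scale=0.1, size=(z_dim, x_dim))
W_logvar = rng.normal(scale=0.1, size=(z_dim, x_dim))
# Linear "decoder": maps a latent z back to a reconstruction of x
W_dec = rng.normal(scale=0.1, size=(x_dim, z_dim))

def encode(x):
    return W_mu @ x, W_logvar @ x

def reparameterize(mu, logvar):
    # z = mu + sigma * eps: sampling stays differentiable w.r.t. mu, sigma
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * logvar) * eps

def decode(z):
    return W_dec @ z

def kl_to_standard_normal(mu, logvar):
    # KL( N(mu, sigma^2) || N(0, I) ), summed over latent dimensions
    return 0.5 * np.sum(np.exp(logvar) + mu**2 - 1.0 - logvar)

def negative_elbo(x, beta=1.0):
    # beta > 1 weights the KL term more heavily, as in beta-VAE
    mu, logvar = encode(x)
    z = reparameterize(mu, logvar)
    x_hat = decode(z)
    recon = np.sum((x - x_hat) ** 2)  # Gaussian reconstruction error
    return recon + beta * kl_to_standard_normal(mu, logvar)

x = rng.normal(size=x_dim)
loss = negative_elbo(x)
```

In a practical implementation the linear maps would be neural networks trained by gradient descent on this loss; the sketch only shows how the two terms of the objective fit together.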
Papers
DiffLM: Controllable Synthetic Data Generation via Diffusion Language Models
Ying Zhou, Xinyao Wang, Yulei Niu, Yaojie Shen, Lexin Tang, Fan Chen, Ben He, Le Sun, Longyin Wen
Time-Causal VAE: Robust Financial Time Series Generator
Beatrice Acciaio, Stephan Eckstein, Songyan Hou
DEMONet: Underwater Acoustic Target Recognition based on Multi-Expert Network and Cross-Temporal Variational Autoencoder
Yuan Xie, Xiaowei Zhang, Jiawei Ren, Ji Xu
Variational Neural Stochastic Differential Equations with Change Points
Yousef El-Laham, Zhongchang Sun, Haibei Zhu, Tucker Balch, Svitlana Vyetrenko
α-TCVAE: On the relationship between Disentanglement and Diversity
Cristian Meo, Louis Mahon, Anirudh Goyal, Justin Dauwels
Analyzing Multimodal Integration in the Variational Autoencoder from an Information-Theoretic Perspective
Carlotta Langer, Yasmin Kim Georgie, Ilja Porohovoj, Verena Vanessa Hafner, Nihat Ay
Deep Compression Autoencoder for Efficient High-Resolution Diffusion Models
Junyu Chen, Han Cai, Junsong Chen, Enze Xie, Shang Yang, Haotian Tang, Muyang Li, Yao Lu, Song Han
AI-based particle track identification in scintillating fibres read out with imaging sensors
Noemi Bührer, Saúl Alonso-Monsalve, Matthew Franks, Till Dieminger, Davide Sgalaberna
Gaussian Mixture Vector Quantization with Aggregated Categorical Posterior
Mingyuan Yan, Jiawei Wu, Rushi Shah, Dianbo Liu