Variational Inference
Variational inference (VI) is a family of approximate Bayesian inference methods that recasts posterior estimation as optimization: a tractable family of distributions is fit to an intractable posterior, typically by maximizing the evidence lower bound (ELBO). Current research focuses on improving VI's scalability and accuracy through algorithmic advances such as stochastic variance reduction, amortized inference, and natural-gradient updates, and through richer model classes such as Gaussian processes, Bayesian neural networks, and mixture models, often in the context of specific applications like anomaly detection and inverse problems. These advances enable more robust uncertainty quantification, improved model interpretability, and efficient solutions to previously intractable problems in areas ranging from 3D scene modeling to causal discovery.
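To make the core idea concrete, here is a minimal sketch of mean-field Gaussian VI with the reparameterization trick, assuming PyTorch. The toy target density and all names (`log_p`, `mu`, `rho`) are illustrative assumptions, not taken from any paper listed below; the sketch fits q(z) = N(mu, sigma²) to an unnormalized target by minimizing a Monte Carlo estimate of the negative ELBO.

```python
import torch

# Illustrative toy target (not from any listed paper): an unnormalized
# 1-D Gaussian mixture whose posterior we pretend is intractable.
def log_p(z):
    components = torch.stack([
        torch.distributions.Normal(-2.0, 0.5).log_prob(z),
        torch.distributions.Normal(2.0, 0.5).log_prob(z),
    ])
    return torch.logsumexp(components, dim=0) - torch.log(torch.tensor(2.0))

# Variational parameters of q(z) = Normal(mu, softplus(rho)).
mu = torch.zeros(1, requires_grad=True)
rho = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, rho], lr=5e-2)

for step in range(2000):
    sigma = torch.nn.functional.softplus(rho)
    # Reparameterization: z = mu + sigma * eps with eps ~ N(0, 1),
    # so gradients flow through the samples to (mu, rho).
    eps = torch.randn(64, 1)
    z = mu + sigma * eps
    log_q = torch.distributions.Normal(mu, sigma).log_prob(z)
    # Monte Carlo estimate of the negative ELBO: E_q[log q(z) - log p(z)].
    loss = (log_q - log_p(z)).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

sigma = torch.nn.functional.softplus(rho)
print(f"fitted q: mu={mu.item():.3f}, sigma={sigma.item():.3f}")
```

Because this objective is the reverse KL divergence KL(q‖p), the fitted Gaussian typically collapses onto one of the two modes rather than covering both; this mode-seeking behavior is one motivation for the richer approximating families, such as Gaussian mixtures, explored in the papers below.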
Papers
Period VITS: Variational Inference with Explicit Pitch Modeling for End-to-end Emotional Speech Synthesis
Yuma Shirahata, Ryuichi Yamamoto, Eunwoo Song, Ryo Terashima, Jae-Min Kim, Kentaro Tachibana
DPVIm: Differentially Private Variational Inference Improved
Joonas Jälkö, Lukas Prediger, Antti Honkela, Samuel Kaski
On the optimization and pruning for Bayesian deep learning
Xiongwen Ke, Yanan Fan
GFlowOut: Dropout with Generative Flow Networks
Dianbo Liu, Moksh Jain, Bonaventure Dossou, Qianli Shen, Salem Lahlou, Anirudh Goyal, Nikolay Malkin, Chris Emezue, Dinghuai Zhang, Nadhir Hassen, Xu Ji, Kenji Kawaguchi, Yoshua Bengio
Differentially private partitioned variational inference
Mikko A. Heikkilä, Matthew Ashman, Siddharth Swaroop, Richard E. Turner, Antti Honkela
A Unified Perspective on Natural Gradient Variational Inference with Gaussian Mixture Models
Oleg Arenz, Philipp Dahlinger, Zihan Ye, Michael Volpp, Gerhard Neumann