Variational Inference
Variational inference (VI) is a family of approximate Bayesian inference methods that recast posterior estimation as optimization: a tractable family of distributions is fit to an intractable target, typically by maximizing the evidence lower bound (ELBO). This makes it possible to efficiently approximate the complex probability distributions that arise throughout machine learning and scientific modeling. Current research focuses on improving the scalability and accuracy of VI through algorithms such as stochastic variance reduction and amortized inference, and through richer model classes such as Gaussian processes, Bayesian neural networks, and mixture models, often in the context of specific applications like anomaly detection and inverse problems. These advances enable more robust uncertainty quantification, improved model interpretability, and efficient solutions to previously intractable problems in areas ranging from 3D scene modeling to causal discovery.
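To make the idea concrete, the sketch below fits a Gaussian variational approximation by stochastic gradient ascent on the ELBO using reparameterization gradients. The toy model, synthetic data, learning rate, and sample counts are illustrative assumptions chosen so the result can be checked against a closed-form posterior; they are not taken from any of the papers listed below.

```python
# A minimal sketch of mean-field variational inference with reparameterization
# gradients, using only NumPy. Illustrative (assumed) model:
#   theta ~ N(0, 1),  x_i | theta ~ N(theta, sigma_obs^2)
# The variational family q(theta) = N(mu, exp(log_std)^2) is fit by stochastic
# gradient ascent on the ELBO; the exact posterior is conjugate, so we can
# compare the approximation against it at the end.
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data (assumed values for illustration only)
sigma_obs = 2.0
x = rng.normal(1.5, sigma_obs, size=50)
n = x.size

# Variational parameters of q(theta) = N(mu, exp(log_std)^2)
mu, log_std = 0.0, 0.0
lr, n_samples = 0.01, 16

def grad_log_joint(theta):
    """d/dtheta [log p(x | theta) + log p(theta)] for the model above."""
    return np.sum(x - theta) / sigma_obs**2 - theta

for step in range(2000):
    eps = rng.standard_normal(n_samples)
    theta = mu + np.exp(log_std) * eps            # reparameterization trick
    g = np.array([grad_log_joint(t) for t in theta])
    grad_mu = g.mean()                            # Monte Carlo dELBO/dmu
    # dELBO/dlog_std: pathwise term plus gradient of the Gaussian entropy (+1)
    grad_log_std = (g * eps).mean() * np.exp(log_std) + 1.0
    mu += lr * grad_mu
    log_std += lr * grad_log_std

# Exact posterior of the conjugate Gaussian-Gaussian model, for comparison
post_prec = n / sigma_obs**2 + 1.0
post_mean = (x.sum() / sigma_obs**2) / post_prec
print(f"VI:    mean={mu:.3f}, std={np.exp(log_std):.3f}")
print(f"Exact: mean={post_mean:.3f}, std={post_prec**-0.5:.3f}")
```

In this conjugate setting the Gaussian family contains the true posterior, so the fitted mean and standard deviation should closely match the exact values; the same ELBO-with-reparameterization recipe is what scales up, with richer families and stochastic or amortized estimators, in the work surveyed here.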
Papers
Concurrent Training and Layer Pruning of Deep Neural Networks
Valentin Frank Ingmar Guenter, Athanasios Sideris
Regularized KL-Divergence for Well-Defined Function-Space Variational Inference in Bayesian Neural Networks
Tristan Cinquin, Robert Bamler
Theoretical Guarantees for Variational Inference with Fixed-Variance Mixture of Gaussians
Tom Huix, Anna Korba, Alain Durmus, Eric Moulines
Posterior and variational inference for deep neural networks with heavy-tailed weights
Ismaël Castillo, Paul Egels
Variational Pseudo Marginal Methods for Jet Reconstruction in Particle Physics
Hanming Yang, Antonio Khalil Moretti, Sebastian Macaluso, Philippe Chlenski, Christian A. Naesseth, Itsik Pe'er
You Only Accept Samples Once: Fast, Self-Correcting Stochastic Variational Inference
Dominic B. Dayta