Variational Inference
Variational inference (VI) is a family of approximate Bayesian inference methods that recast posterior estimation as an optimization problem, allowing complex probability distributions arising in machine learning and scientific modeling to be approximated efficiently. Current research focuses on improving VI's scalability and accuracy through algorithmic advances such as stochastic variance reduction and amortized inference, and through expressive model families such as Gaussian processes, Bayesian neural networks, normalizing flows, and mixture models, often in the context of specific applications like anomaly detection and inverse problems. These advances are enabling more robust uncertainty quantification, improved model interpretability, and efficient solutions to previously intractable problems in areas ranging from 3D scene modeling to causal discovery.
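At its core, most of this work optimizes the evidence lower bound (ELBO) over a parametric family of approximating distributions. The sketch below illustrates the basic pattern with a minimal reparameterization-trick VI loop in JAX, fitting a Gaussian to a toy one-dimensional target; the target density, step size, and all names are illustrative assumptions, not code from any of the papers listed below.

```python
# Minimal sketch of reparameterization-trick variational inference in JAX.
# The target (an unnormalized Gaussian N(2, 0.5^2)) and all hyperparameters
# are toy choices for illustration only.
import jax
import jax.numpy as jnp

def log_target(z):
    # Unnormalized log-density of the toy target N(2, 0.5^2).
    return -0.5 * ((z - 2.0) / 0.5) ** 2

def elbo(params, key, num_samples=64):
    mu, log_sigma = params
    sigma = jnp.exp(log_sigma)
    eps = jax.random.normal(key, (num_samples,))
    z = mu + sigma * eps  # reparameterized samples from q
    # log q(z) for the Gaussian variational family N(mu, sigma^2)
    log_q = (-0.5 * ((z - mu) / sigma) ** 2
             - log_sigma - 0.5 * jnp.log(2.0 * jnp.pi))
    return jnp.mean(log_target(z) - log_q)  # Monte Carlo ELBO estimate

# Minimize the negative ELBO by plain gradient descent.
loss_and_grad = jax.jit(jax.value_and_grad(lambda p, k: -elbo(p, k)))

params = (jnp.array(0.0), jnp.array(0.0))  # mu = 0, log_sigma = 0
key = jax.random.PRNGKey(0)
for _ in range(500):
    key, subkey = jax.random.split(key)
    _, grads = loss_and_grad(params, subkey)
    params = tuple(p - 0.05 * g for p, g in zip(params, grads))

mu, log_sigma = params
# Should recover roughly mu = 2.0 and sigma = 0.5.
print(f"fitted mu={float(mu):.3f}, sigma={float(jnp.exp(log_sigma)):.3f}")
```

The same loop, with richer variational families and variance-reduced or amortized gradient estimators, is the pattern the papers below scale up to high-dimensional posteriors.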
Papers
Re-Envisioning Numerical Information Field Theory (NIFTy.re): A Library for Gaussian Processes and Variational Inference
Gordian Edenhofer, Philipp Frank, Jakob Roth, Reimar H. Leike, Massin Guerdi, Lukas I. Scheel-Platz, Matteo Guardiani, Vincent Eberle, Margret Westerkamp, Torsten A. Enßlin
Stable Training of Normalizing Flows for High-dimensional Variational Inference
Daniel Andrade
Batch and match: black-box variational inference with a score-based divergence
Diana Cai, Chirag Modi, Loucas Pillaud-Vivien, Charles C. Margossian, Robert M. Gower, David M. Blei, Lawrence K. Saul
A Framework for Variational Inference of Lightweight Bayesian Neural Networks with Heteroscedastic Uncertainties
David J. Schodt, Ryan Brown, Michael Merritt, Samuel Park, Delsin Menolascino, Mark A. Peot