Variational Inference
Variational inference (VI) is a family of approximate Bayesian inference methods that recasts posterior estimation as optimization, making it possible to efficiently approximate the complex probability distributions that arise in machine learning and scientific modeling. Current research focuses on improving VI's scalability and accuracy through algorithmic advances such as stochastic variance reduction and amortized inference, and through model architectures such as Gaussian processes, Bayesian neural networks, and mixture models, often in the context of specific applications like anomaly detection and inverse problems. These advances are having a significant impact across fields, enabling more robust uncertainty quantification, improved model interpretability, and efficient solutions to previously intractable problems in areas ranging from 3D scene modeling to causal discovery.
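At its core, VI posits a tractable family q(z) and maximizes the evidence lower bound (ELBO) with stochastic gradients. As a minimal illustrative sketch (not taken from any of the papers listed below), the PyTorch snippet here fits a Gaussian q to a toy one-dimensional target via the reparameterization trick; the target density, sample size, and learning rate are all arbitrary choices for the example.

```python
import torch

# Toy unnormalized target: log p(z) of N(2.0, 0.5^2) up to a constant.
# Everything below is illustrative, not the method of any listed paper.
def log_p(z):
    return -0.5 * ((z - 2.0) / 0.5) ** 2

# Variational family: q(z) = N(mu, exp(log_sigma)^2).
mu = torch.tensor(0.0, requires_grad=True)
log_sigma = torch.tensor(0.0, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    eps = torch.randn(64)                 # Monte Carlo base noise
    z = mu + torch.exp(log_sigma) * eps   # reparameterization trick
    # ELBO = E_q[log p(z)] + H[q], with the Gaussian entropy in closed form.
    entropy = log_sigma + 0.5 * (1.0 + torch.log(torch.tensor(2.0 * torch.pi)))
    elbo = log_p(z).mean() + entropy
    (-elbo).backward()                    # ascend the ELBO
    opt.step()

print(mu.item(), torch.exp(log_sigma).item())  # converges near 2.0 and 0.5
```

Black-box VI methods, including the convergence analysis cited below, study exactly this kind of stochastic-gradient ELBO optimization when only (unnormalized) log-density evaluations of the target are available.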
Papers
On the Convergence of Black-Box Variational Inference
Kyurae Kim, Jisu Oh, Kaiwen Wu, Yi-An Ma, Jacob R. Gardner
Bayesian calibration of differentiable agent-based models
Arnau Quera-Bofarull, Ayush Chopra, Anisoara Calinescu, Michael Wooldridge, Joel Dyer
A Rigorous Link between Deep Ensembles and (Variational) Bayesian Methods
Veit David Wild, Sahra Ghalebikesabi, Dino Sejdinovic, Jeremias Knoblauch
Amortized Variational Inference with Coverage Guarantees
Yash Patel, Declan McNamara, Jackson Loper, Jeffrey Regier, Ambuj Tewari
Federated Variational Inference: Towards Improved Personalization and Generalization
Elahe Vedadi, Joshua V. Dillon, Philip Andrew Mansfield, Karan Singhal, Arash Afkanpour, Warren Richard Morningstar