Variational Inference
Variational inference (VI) is a family of approximate Bayesian inference methods that recast posterior estimation as an optimization problem, making it possible to efficiently approximate complex probability distributions encountered in machine learning and scientific modeling. Current research focuses on improving VI's scalability and accuracy through algorithms such as stochastic variance reduction and amortized inference, and through model architectures such as Gaussian processes, Bayesian neural networks, and mixture models, often in the context of specific applications like anomaly detection and inverse problems. These advances are enabling more robust uncertainty quantification, improved model interpretability, and efficient solutions to previously intractable problems in areas ranging from 3D scene modeling to causal discovery.
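To make the optimization view concrete, here is a minimal, self-contained sketch of VI (not drawn from any of the papers below): a diagonal-Gaussian variational distribution q(z) is fitted to an unnormalized target log-density by maximizing a Monte Carlo estimate of the evidence lower bound (ELBO) with the reparameterization trick. PyTorch and the illustrative "banana"-shaped target are assumptions chosen for the example.

```python
# Minimal VI sketch (assumed setup, illustrative only):
# maximize ELBO = E_q[log p(z)] - E_q[log q(z)] over a diagonal Gaussian q.
import torch

def log_target(z):
    # Unnormalized log-density of a 2D "banana" posterior (hypothetical target).
    return -0.5 * (z[:, 0] ** 2 + (z[:, 1] - z[:, 0] ** 2) ** 2)

# Variational parameters of q(z) = N(mu, diag(sigma^2)).
mu = torch.zeros(2, requires_grad=True)
log_sigma = torch.zeros(2, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    eps = torch.randn(256, 2)                # reparameterization: z = mu + sigma * eps
    z = mu + log_sigma.exp() * eps
    log_q = torch.distributions.Normal(mu, log_sigma.exp()).log_prob(z).sum(dim=1)
    elbo = (log_target(z) - log_q).mean()    # Monte Carlo ELBO estimate
    (-elbo).backward()                       # ascend the ELBO by descending its negative
    opt.step()

print("variational mean:", mu.detach(), "std:", log_sigma.exp().detach())
```

Because the expectation is taken under q, sampling via z = mu + sigma * eps lets gradients flow to the variational parameters, which is what makes this stochastic-gradient formulation of VI scale to the large models mentioned above.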
Papers
Revised Regularization for Efficient Continual Learning through Correlation-Based Parameter Update in Bayesian Neural Networks
Sanchar Palit, Biplab Banerjee, Subhasis Chaudhuri
Variational Autoencoders for Efficient Simulation-Based Inference
Mayank Nautiyal, Andrey Shternshis, Andreas Hellander, Prashant Singh
LazyDINO: Fast, scalable, and efficiently amortized Bayesian inversion via structure-exploiting and surrogate-driven measure transport
Lianghao Cao, Joshua Chen, Michael Brennan, Thomas O'Leary-Roseberry, Youssef Marzouk, Omar Ghattas
C$^{2}$INet: Realizing Incremental Trajectory Prediction with Prior-Aware Continual Causal Intervention
Xiaohe Li, Feilong Huang, Zide Fan, Fangli Mou, Leilei Lin, Yingyan Hou, Lijie Wen