Amortised Inference
Amortised inference aims to speed up Bayesian inference, a computationally expensive process crucial for many scientific applications, by training a model once that can then rapidly approximate posterior distributions for new data. Current research focuses on neural networks that learn these fast approximations, with applications spanning Bayesian neural networks, optimal experimental design, and scientific simulation. By paying the computational cost up front, this approach enables efficient analysis of large datasets and real-time applications where traditional methods are too slow, ultimately improving the scalability and practicality of Bayesian methods.
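The core idea can be sketched on a toy problem. The snippet below is a minimal, hypothetical illustration (not from any of the listed papers): it uses a simple linear regressor as the amortised model, standing in for the neural networks used in practice, on a conjugate Gaussian example where the exact posterior mean is known. The expensive work, simulating parameter–data pairs and fitting the regressor, happens once offline; inference on a new dataset is then a single function evaluation. Minimising the expected squared error to simulated parameters makes the regressor approximate the posterior mean.

```python
import numpy as np

rng = np.random.default_rng(0)
sigma, n = 1.0, 10  # known noise scale, observations per dataset

# Offline (amortisation) phase: simulate (theta, data) pairs from the joint.
N = 20000
theta = rng.normal(0.0, 1.0, N)                 # prior: theta ~ N(0, 1)
x = rng.normal(theta[:, None], sigma, (N, n))   # likelihood: x_i ~ N(theta, sigma^2)
xbar = x.mean(axis=1)                           # sufficient statistic

# Fit f(xbar) ~ E[theta | data] by least squares: minimising the expected
# squared error E[(theta - f(x))^2] over the joint recovers the posterior mean.
A = np.stack([xbar, np.ones(N)], axis=1)
w, b = np.linalg.lstsq(A, theta, rcond=None)[0]

# Online phase: a new dataset yields an instant posterior-mean estimate.
x_new = rng.normal(0.7, sigma, n)
est = w * x_new.mean() + b

# Exact conjugate posterior mean, for comparison.
exact = (n / sigma**2) * x_new.mean() / (n / sigma**2 + 1)
print(f"amortised: {est:.3f}  exact: {exact:.3f}")
```

In this conjugate setting the learned weight `w` should approach n / (n + sigma^2), so the amortised estimate closely tracks the exact posterior mean; with an intractable likelihood, the same simulate-then-regress recipe still applies, which is what makes the approach attractive for scientific simulators.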