Bayesian Inverse Problem
Bayesian inverse problems aim to infer unknown parameters from noisy observations using probabilistic models, quantifying the uncertainty inherent in the inference. Current research heavily utilizes generative models, particularly diffusion models and normalizing flows, often coupled with Markov chain Monte Carlo (MCMC) methods or variational inference for efficient posterior sampling. These advances are improving the accuracy and efficiency of solving high-dimensional, nonlinear inverse problems across diverse fields, including medical imaging, materials science, and geophysical modeling. A further line of work develops methods that remain reliable under limited data, noisy inputs, and model uncertainty. The sketch below illustrates the basic posterior-inference setup in its simplest form.
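As a minimal, illustrative sketch (not drawn from any of the listed papers), consider the simplest linear-Gaussian instance: a known forward operator A, Gaussian measurement noise, and a Gaussian prior on the unknown parameters. In this special case the posterior has a closed form, and its covariance is exactly the uncertainty quantification the summary refers to; the nonlinear, high-dimensional problems in the papers below replace this closed form with learned priors and sampling schemes. All variable names here are illustrative.

```python
import numpy as np

# Linear-Gaussian Bayesian inverse problem:
#   y = A x + eta,  eta ~ N(0, sigma_noise^2 I),  prior x ~ N(0, sigma_prior^2 I).
# The posterior p(x | y) is Gaussian with a closed form; its covariance
# quantifies the uncertainty remaining after observing y.

rng = np.random.default_rng(0)

n_params, n_obs = 20, 10            # unknown parameters, noisy observations
sigma_prior, sigma_noise = 1.0, 0.1

A = rng.normal(size=(n_obs, n_params))            # known forward operator
x_true = rng.normal(scale=sigma_prior, size=n_params)
y = A @ x_true + rng.normal(scale=sigma_noise, size=n_obs)

# Conjugate-Gaussian posterior:
#   Sigma_post^{-1} = A^T A / sigma_noise^2 + I / sigma_prior^2
#   mu_post         = Sigma_post A^T y / sigma_noise^2
post_precision = A.T @ A / sigma_noise**2 + np.eye(n_params) / sigma_prior**2
post_cov = np.linalg.inv(post_precision)
post_mean = post_cov @ (A.T @ y) / sigma_noise**2

# Marginal posterior standard deviations give per-parameter uncertainty.
post_std = np.sqrt(np.diag(post_cov))
print("posterior mean (first 5):", post_mean[:5])
print("posterior std  (first 5):", post_std[:5])
```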
Papers
Taming Score-Based Diffusion Priors for Infinite-Dimensional Nonlinear Inverse Problems
Lorenzo Baldassari, Ali Siahkoohi, Josselin Garnier, Knut Solna, Maarten V. de Hoop
Reducing the cost of posterior sampling in linear inverse problems via task-dependent score learning
Fabian Schneider, Duc-Lam Duong, Matti Lassas, Maarten V. de Hoop, Tapio Helin
Bayesian Inverse Problems with Conditional Sinkhorn Generative Adversarial Networks in Least Volume Latent Spaces
Qiuyi Chen, Panagiotis Tsilifis, Mark Fuge
Optimized Linear Measurements for Inverse Problems using Diffusion-Based Image Generation
Ling-Qi Zhang, Zahra Kadkhodaie, Eero P. Simoncelli, David H. Brainard
Learning Diffusion Priors from Observations by Expectation Maximization
François Rozet, Gérôme Andry, François Lanusse, Gilles Louppe