Posterior Sampling
Posterior sampling seeks to generate samples efficiently from the posterior distribution over unknown parameters given observed data. Current research focuses on improving the efficiency and accuracy of posterior sampling for high-dimensional data and complex models, using techniques such as diffusion models, normalizing flows, and Langevin dynamics. These advances enable more robust and efficient inference in challenging settings across diverse fields, including image processing, Bayesian inverse problems, and reinforcement learning. Developing computationally tractable posterior-sampling algorithms is therefore crucial to progress in these areas.
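As a concrete illustration of one technique mentioned above, here is a minimal sketch of posterior sampling via the unadjusted Langevin algorithm (ULA) on a toy conjugate-Gaussian model. All variable names, step sizes, and the model itself are illustrative assumptions, not drawn from any of the papers listed below; ULA iterates theta += (eps/2) * grad_log_posterior(theta) + sqrt(eps) * noise, so its samples approximate the posterior for small step sizes.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy model (assumed for illustration): data y_i ~ N(theta, sigma^2) with
# known sigma, and a Gaussian prior theta ~ N(0, tau^2).
sigma, tau = 1.0, 2.0
data = rng.normal(1.5, sigma, size=50)

def grad_log_posterior(theta):
    # d/dtheta [ log p(data | theta) + log p(theta) ]
    return np.sum(data - theta) / sigma**2 - theta / tau**2

# Unadjusted Langevin algorithm: discretized Langevin diffusion whose
# stationary distribution approximates the posterior for small eps.
eps, n_steps, burn_in = 1e-3, 20000, 5000
theta, samples = 0.0, []
for k in range(n_steps):
    theta += 0.5 * eps * grad_log_posterior(theta) + np.sqrt(eps) * rng.normal()
    if k >= burn_in:
        samples.append(theta)

# Conjugacy gives the exact posterior mean, so we can check the sampler.
post_var = 1.0 / (len(data) / sigma**2 + 1.0 / tau**2)
post_mean = post_var * np.sum(data) / sigma**2
print(f"ULA estimate: {np.mean(samples):.3f}, exact posterior mean: {post_mean:.3f}")
```

In higher dimensions the same recipe applies with a vector-valued gradient; the diffusion- and flow-based methods surveyed above can be seen as richer ways of constructing or amortizing such sampling dynamics.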
Papers
Plug-and-Play Posterior Sampling under Mismatched Measurement and Prior Models
Marien Renaud, Jiaming Liu, Valentin de Bortoli, Andrés Almansa, Ulugbek S. Kamilov
Learning Energy-Based Prior Model with Diffusion-Amortized MCMC
Peiyu Yu, Yaxuan Zhu, Sirui Xie, Xiaojian Ma, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu