Posterior Sampling
Posterior sampling aims to generate samples efficiently from the probability distribution representing posterior belief about unknown parameters given observed data. Current research focuses on improving the efficiency and accuracy of posterior sampling for high-dimensional data and complex models, employing techniques such as diffusion models, normalizing flows, and Langevin dynamics. These advances enable more robust and efficient inference in challenging settings across diverse fields, including image processing, Bayesian inverse problems, and reinforcement learning. Developing computationally tractable algorithms for posterior sampling is therefore crucial for progress in these areas.
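Of the techniques mentioned above, Langevin dynamics is the simplest to illustrate. The sketch below runs the unadjusted Langevin algorithm (ULA) on a toy conjugate-Gaussian problem, where the exact posterior is known in closed form and can be used to check the samples. All specifics here (the prior scale tau, the step size h, the synthetic data) are illustrative assumptions, not taken from any of the papers listed below.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy setup (illustrative assumptions): infer the mean theta of a Gaussian
# likelihood y ~ N(theta, sigma^2) under a N(0, tau^2) prior.
sigma, tau = 1.0, 2.0
y = rng.normal(1.5, sigma, size=50)  # synthetic observed data

def grad_log_posterior(theta):
    """Gradient of log p(theta | y) = log prior + log likelihood (up to a constant)."""
    grad_prior = -theta / tau**2
    grad_lik = np.sum(y - theta) / sigma**2
    return grad_prior + grad_lik

# Unadjusted Langevin algorithm: theta <- theta + (h/2) * grad + sqrt(h) * noise
h = 1e-3        # step size; assumed here, in practice it needs tuning
theta = 0.0
samples = []
for _ in range(20000):
    theta += 0.5 * h * grad_log_posterior(theta) + np.sqrt(h) * rng.standard_normal()
    samples.append(theta)

burn = samples[5000:]  # discard burn-in

# Closed-form Gaussian posterior for comparison
post_var = 1.0 / (1.0 / tau**2 + len(y) / sigma**2)
post_mean = post_var * y.sum() / sigma**2
print(f"ULA mean/std:   {np.mean(burn):.3f} / {np.std(burn):.3f}")
print(f"Exact mean/std: {post_mean:.3f} / {np.sqrt(post_var):.3f}")
```

ULA only needs the gradient of the log posterior, not its normalizing constant, which is why the same update underlies the score-based diffusion samplers studied in several of the papers below; for small step sizes its stationary distribution approximates the target posterior.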
Papers
Bayesian Neural Networks with Domain Knowledge Priors
Dylan Sam, Rattana Pukdee, Daniel P. Jeong, Yewon Byun, J. Zico Kolter
Incentivized Exploration via Filtered Posterior Sampling
Anand Kalvit, Aleksandrs Slivkins, Yonatan Gur
Diffusion Posterior Sampling is Computationally Intractable
Shivam Gupta, Ajil Jalal, Aditya Parulekar, Eric Price, Zhiyang Xun
Plug-and-Play Posterior Sampling under Mismatched Measurement and Prior Models
Marien Renaud, Jiaming Liu, Valentin de Bortoli, Andrés Almansa, Ulugbek S. Kamilov
Learning Energy-Based Prior Model with Diffusion-Amortized MCMC
Peiyu Yu, Yaxuan Zhu, Sirui Xie, Xiaojian Ma, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu