Prior Model
Prior models encode pre-existing knowledge or assumptions about data, and they are crucial for improving the efficiency and robustness of many machine learning tasks. Current research focuses on adaptive and efficient ways to incorporate these priors, including hybrid ensemble Q-learning for reinforcement learning, energy-based and diffusion models for generative tasks, and neural operator approximations for Bayesian inversion. These advances are shaping fields ranging from image processing and scientific simulation to natural language processing, enabling more accurate, data-efficient, and interpretable models.
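As a concrete illustration of how a learned prior can be plugged into posterior sampling, the sketch below runs a minimal plug-and-play unadjusted Langevin sampler on a toy linear inverse problem, using Tweedie's formula to turn a denoiser into a prior score. The denoiser (`toy_denoiser`), problem sizes, noise level, and step size are all illustrative assumptions, not settings taken from the papers listed below.

```python
# Minimal sketch of plug-and-play posterior sampling via the unadjusted
# Langevin algorithm (PnP-ULA): Langevin steps driven by the likelihood
# gradient plus a denoiser-based prior score. All constants are assumptions.
import numpy as np

rng = np.random.default_rng(0)

# Toy linear inverse problem: y = A x + noise, Gaussian likelihood.
n, m = 32, 16
A = rng.standard_normal((m, n)) / np.sqrt(m)
x_true = rng.standard_normal(n)
noise_std = 0.1
y = A @ x_true + noise_std * rng.standard_normal(m)

def grad_log_likelihood(x):
    # Gradient of log p(y | x) for Gaussian noise: A^T (y - A x) / s^2.
    return A.T @ (y - A @ x) / noise_std**2

def toy_denoiser(x, sigma):
    # Stand-in for a learned denoiser D_sigma; this is the exact MMSE
    # denoiser under a standard-Gaussian prior (shrinkage toward zero).
    return x / (1.0 + sigma**2)

def prior_score(x, sigma=0.5):
    # Tweedie's formula: grad log p_sigma(x) ~ (D_sigma(x) - x) / sigma^2.
    return (toy_denoiser(x, sigma) - x) / sigma**2

# PnP-ULA iterations: x <- x + step * drift + sqrt(2 * step) * noise.
step = 1e-3
x = np.zeros(n)
samples = []
for k in range(5000):
    drift = grad_log_likelihood(x) + prior_score(x)
    x = x + step * drift + np.sqrt(2 * step) * rng.standard_normal(n)
    if k >= 1000:  # discard burn-in before collecting posterior samples
        samples.append(x.copy())

posterior_mean = np.mean(samples, axis=0)
print("reconstruction error:", np.linalg.norm(posterior_mean - x_true))
```

Swapping `toy_denoiser` for a pretrained neural denoiser gives the usual plug-and-play setup; the first paper below studies what happens when that denoiser or the measurement model is mismatched with the true data distribution.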
Papers
Plug-and-Play Posterior Sampling under Mismatched Measurement and Prior Models
Marien Renaud, Jiaming Liu, Valentin de Bortoli, Andrés Almansa, Ulugbek S. Kamilov
Learning Energy-Based Prior Model with Diffusion-Amortized MCMC
Peiyu Yu, Yaxuan Zhu, Sirui Xie, Xiaojian Ma, Ruiqi Gao, Song-Chun Zhu, Ying Nian Wu