Functional Prior

Functional priors are increasingly used in Bayesian machine learning to incorporate prior knowledge about the shape or properties of a function into model training, improving data efficiency and accuracy. Current research focuses on integrating functional priors with various models, including Bayesian neural networks and Gaussian processes, often employing techniques such as anchored ensembling, amortized active learning, and Laplace approximations to keep inference tractable. These methods improve Bayesian optimization, active learning, and uncertainty quantification in applications such as materials science, time series analysis, and personalized medicine, particularly where data is scarce or expensive to acquire. The resulting gains in model accuracy and uncertainty estimation lead to more reliable predictions and better decision-making in these fields.
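
For intuition, the sketch below illustrates one of the techniques mentioned above, anchored ensembling, in a minimal regression setting: each ensemble member is regularized toward its own "anchor" weights drawn from a Gaussian prior (which in turn induces a prior over functions), so the trained members behave approximately like posterior samples and their spread gives an uncertainty estimate. This is a generic illustration under assumed settings (network size, prior and noise scales, toy data), not the implementation from any particular paper.

```python
# Minimal anchored-ensembling sketch (illustrative; all hyperparameters and data are assumptions).
import torch
import torch.nn as nn

def make_net():
    # Small regression network; its Gaussian weight prior induces a prior over functions.
    return nn.Sequential(nn.Linear(1, 64), nn.Tanh(), nn.Linear(64, 1))

def train_anchored_member(x, y, prior_std=1.0, noise_std=0.1, epochs=2000):
    net = make_net()
    # Draw fixed anchor weights from the assumed Gaussian prior.
    anchors = [torch.randn_like(p) * prior_std for p in net.parameters()]
    opt = torch.optim.Adam(net.parameters(), lr=1e-2)
    n = x.shape[0]
    for _ in range(epochs):
        opt.zero_grad()
        # Data-fit term plus regularization toward the anchor (not toward zero).
        mse = ((net(x) - y) ** 2).sum() / (2 * noise_std ** 2)
        reg = sum(((p - a) ** 2).sum() for p, a in zip(net.parameters(), anchors))
        loss = (mse + reg / (2 * prior_std ** 2)) / n
        loss.backward()
        opt.step()
    return net

# Toy data: noisy sine observed on a limited input range.
x = torch.linspace(-1, 1, 40).unsqueeze(-1)
y = torch.sin(3 * x) + 0.1 * torch.randn_like(x)
ensemble = [train_anchored_member(x, y) for _ in range(5)]

# Predictive mean and spread; uncertainty grows away from the observed data.
x_test = torch.linspace(-2, 2, 200).unsqueeze(-1)
with torch.no_grad():
    preds = torch.stack([m(x_test) for m in ensemble])
mean, std = preds.mean(0), preds.std(0)
```

The key design choice is that regularizing toward prior draws, rather than toward zero as in standard weight decay, is what lets the ensemble's disagreement reflect the prior's uncertainty in regions with little data.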

Papers