Invariant Posterior
Invariant posteriors in Bayesian inference are probability distributions over model parameters that remain unchanged under specified transformations, such as permutations of hidden units or rescalings of weights; constructing them is intended to improve robustness and generalization. Current research applies this idea to neural networks, particularly Bayesian neural networks (BNNs), and to Bayesian quadrature, using techniques such as variational inference and specially designed prior distributions to achieve invariance. This work addresses challenges such as the multimodality of BNN posteriors, where weight-space symmetries produce many functionally equivalent modes, and scale-dependent generalization bounds, leading to more reliable uncertainty quantification and improved performance in applications including domain generalization and numerical integration.
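To make the invariance condition concrete: a density q over parameters is invariant under a finite symmetry group G when q(g . theta) = q(theta) for every g in G, and any base density can be made invariant by averaging q(g . theta) over the group. The sketch below is a minimal, hypothetical illustration of this symmetrization for a one-hidden-layer MLP whose hidden units can be permuted without changing the network function; the layout and the names (diag_gaussian_logpdf, symmetrized_logpdf) are assumptions for illustration, not an implementation from any particular paper summarized above.

```python
import itertools

import numpy as np


def diag_gaussian_logpdf(x, mean, log_std):
    # Log-density of a diagonal Gaussian N(mean, diag(exp(log_std)^2)).
    z = (x - mean) / np.exp(log_std)
    return -0.5 * np.sum(z ** 2 + 2.0 * log_std + np.log(2.0 * np.pi))


def symmetrized_logpdf(theta, mean, log_std, h, d):
    # log q_bar(theta) = log[(1/h!) * sum over perms of q(perm . theta)],
    # where each perm relabels the h hidden units of a one-hidden-layer MLP.
    # Averaging over the whole permutation group makes q_bar invariant:
    # q_bar(g . theta) = q_bar(theta) for every permutation g.
    W1 = theta[: h * d].reshape(h, d)        # input-to-hidden weights
    b1 = theta[h * d : h * d + h]            # hidden biases
    w2 = theta[h * d + h :]                  # hidden-to-output weights
    log_terms = []
    for perm in itertools.permutations(range(h)):
        p = np.array(perm)
        theta_g = np.concatenate([W1[p].ravel(), b1[p], w2[p]])
        log_terms.append(diag_gaussian_logpdf(theta_g, mean, log_std))
    log_terms = np.array(log_terms)
    m = log_terms.max()                      # log-sum-exp for stability
    return m + np.log(np.mean(np.exp(log_terms - m)))


# Invariance check: the symmetrized density agrees at theta and g . theta.
rng = np.random.default_rng(0)
d, h = 2, 3
n = h * d + h + h
mean, log_std = rng.normal(size=n), 0.1 * rng.normal(size=n)
theta = rng.normal(size=n)

W1 = theta[: h * d].reshape(h, d)
b1, w2 = theta[h * d : h * d + h], theta[h * d + h :]
g = np.array([2, 0, 1])                      # an arbitrary hidden-unit permutation
theta_g = np.concatenate([W1[g].ravel(), b1[g], w2[g]])

print(symmetrized_logpdf(theta, mean, log_std, h, d))
print(symmetrized_logpdf(theta_g, mean, log_std, h, d))  # matches up to float error
```

The explicit sum over all h! permutations is tractable only for tiny hidden layers; the research surveyed here instead builds invariance into the prior, the variational family, or the architecture so that no explicit group averaging is required.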