Approximate Posterior
Approximate posterior methods aim to efficiently estimate the probability distributions that represent uncertainty in complex models, circumventing the intractability of exact Bayesian inference. Current research focuses on improving the accuracy and efficiency of these approximations, particularly for diffusion models, Bayesian neural networks, and reinforcement learning, using techniques such as Laplace approximations, Langevin dynamics, and dropout. These advances are crucial for reliable uncertainty quantification across diverse applications, from inverse problems in imaging to robust decision-making in autonomous systems and improved model calibration for out-of-domain detection.
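To make the idea concrete, here is a minimal sketch of one of the techniques mentioned above, the Laplace approximation, on a hypothetical toy problem: the posterior over a single logistic-regression weight is approximated by a Gaussian centered at the MAP estimate, with variance given by the inverse curvature of the negative log-posterior at that mode. The data-generating setup and all variable names are illustrative, not drawn from any specific paper.

```python
import numpy as np

# Toy data: 200 inputs, binary labels from a logistic model with true weight 1.5.
rng = np.random.default_rng(0)
x = rng.normal(size=200)
w_true = 1.5
y = (rng.random(200) < 1.0 / (1.0 + np.exp(-w_true * x))).astype(float)

def grad_hess(w):
    """Gradient and curvature of the negative log-posterior
    (Bernoulli likelihood + standard-normal prior on w)."""
    p = 1.0 / (1.0 + np.exp(-w * x))
    grad = -np.sum((y - p) * x) + w         # likelihood term + prior term
    hess = np.sum(p * (1.0 - p) * x**2) + 1.0  # likelihood curvature + prior
    return grad, hess

# Newton's method to locate the posterior mode (MAP estimate).
w = 0.0
for _ in range(25):
    g, h = grad_hess(w)
    w -= g / h

_, h = grad_hess(w)
w_map, sigma = w, 1.0 / np.sqrt(h)
# The approximate posterior is N(w_map, sigma^2); sigma quantifies
# the model's uncertainty about the weight.
print(w_map, sigma)
```

The same mode-plus-curvature recipe underlies Laplace approximations for Bayesian neural networks, where the Hessian is itself approximated (e.g., diagonally or per-layer) to stay tractable.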