Variational Posterior
Variational posterior methods approximate complex probability distributions and are central to Bayesian inference when exact posterior computation is intractable. Current research focuses on improving the accuracy and efficiency of these approximations, particularly within deep learning models, using techniques such as normalizing flows, Gaussian processes, and variational autoencoders to build more flexible and expressive posterior representations. These advances enable more robust uncertainty quantification and better model performance in data-scarce or computationally demanding settings, with applications ranging from phylogenetic inference and continual learning to image restoration.
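The core idea is to posit a parameterized family of distributions and optimize its parameters so that it is as close as possible to the true posterior, typically by maximizing the evidence lower bound (ELBO). The sketch below illustrates this with the simplest case, a mean-field Gaussian variational posterior fit via the reparameterization trick; the toy model, data, and hyperparameters are illustrative assumptions, not taken from any of the papers referenced here.

```python
# Minimal sketch: mean-field Gaussian variational posterior trained by
# maximizing a Monte Carlo estimate of the ELBO (assumes PyTorch).
import torch

torch.manual_seed(0)

# Toy model: observations y ~ N(z, 1) with a standard normal prior z ~ N(0, 1).
y = torch.randn(50) + 2.0  # synthetic data centred near 2

# Variational posterior q(z) = N(mu, sigma^2), parameterized for unconstrained optimization.
mu = torch.zeros(1, requires_grad=True)
log_sigma = torch.zeros(1, requires_grad=True)
opt = torch.optim.Adam([mu, log_sigma], lr=0.05)

for step in range(2000):
    opt.zero_grad()
    sigma = log_sigma.exp()

    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, 1),
    # so gradients flow through the sampling step.
    eps = torch.randn(64, 1)
    z = mu + sigma * eps

    # Negative ELBO estimate: -E_q[log p(y | z) + log p(z) - log q(z)].
    log_lik = torch.distributions.Normal(z, 1.0).log_prob(y).sum(dim=-1)
    log_prior = torch.distributions.Normal(0.0, 1.0).log_prob(z).squeeze(-1)
    log_q = torch.distributions.Normal(mu, sigma).log_prob(z).squeeze(-1)
    loss = -(log_lik + log_prior - log_q).mean()

    loss.backward()
    opt.step()

print(f"q(z) approx N({mu.item():.3f}, {log_sigma.exp().item():.3f}^2)")
```

Richer families such as normalizing flows replace the diagonal Gaussian q(z) with a learned invertible transformation of a simple base distribution, trading extra computation for a posterior approximation that can capture skew and multimodality.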