Approximate Sampling
Approximate sampling methods aim to generate samples efficiently from complex probability distributions that are intractable to sample from directly, for tasks such as Bayesian inference and reinforcement learning. Current research focuses on robust, scalable algorithms, including adaptations of Markov chain Monte Carlo (MCMC) methods such as Langevin dynamics and Metropolis-Hastings, as well as approximate Fisher information for active learning. These advances are crucial for tackling high-dimensional problems across many fields, improving the efficiency and accuracy of machine learning models and statistical inference.
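As a concrete illustration of the MCMC methods mentioned above, here is a minimal sketch of random-walk Metropolis-Hastings in plain Python. It is a generic textbook implementation, not taken from any of the papers below; the target (a standard normal known only up to a constant), the step size, and the helper name `metropolis_hastings` are all illustrative choices.

```python
import math
import random

def metropolis_hastings(log_density, x0, n_samples, step=0.5, seed=0):
    """Random-walk Metropolis-Hastings: draws approximate samples from a
    distribution specified only through an unnormalized log-density."""
    rng = random.Random(seed)
    x = x0
    log_p = log_density(x)
    samples = []
    for _ in range(n_samples):
        # Symmetric Gaussian proposal centered at the current state.
        x_new = x + rng.gauss(0.0, step)
        log_p_new = log_density(x_new)
        # Accept with probability min(1, p(x_new) / p(x)); work in log space.
        if math.log(rng.random() + 1e-300) < log_p_new - log_p:
            x, log_p = x_new, log_p_new
        samples.append(x)
    return samples

# Target: standard normal, given only up to its normalizing constant.
samples = metropolis_hastings(lambda x: -0.5 * x * x, x0=0.0, n_samples=20000)
mean = sum(samples) / len(samples)
var = sum((s - mean) ** 2 for s in samples) / len(samples)
```

Because the acceptance ratio depends only on a density ratio, the normalizing constant cancels; this is what makes MCMC applicable to posteriors that can be evaluated only up to proportionality.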