Parallel Sampling
Parallel sampling aims to accelerate the generation of samples from complex probability distributions, a core task in machine learning and scientific computing. Rather than producing a sample one sequential step at a time, these methods refine many steps of the sampling trajectory simultaneously. Current research focuses on diffusion and autoregressive models, employing techniques such as Picard iterations, which recast the sequential denoising trajectory as a parallelizable fixed-point problem, and contrastive training to mitigate issues such as poor out-of-distribution performance. These advances promise faster inference in generative modeling and Bayesian inference, yielding more efficient and scalable algorithms for diverse applications.
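To make the Picard-iteration idea concrete, the sketch below solves a toy sampling ODE by updating every timestep of the trajectory at once and sweeping until a fixed point is reached; on a GPU, each sweep's drift evaluations become one batched, parallel model call. This is a minimal illustration under assumed toy dynamics: the `drift` function, step counts, and tolerances are hypothetical stand-ins, not any specific model's API.

```python
import numpy as np

def drift(x, t):
    # Hypothetical probability-flow drift; a real diffusion sampler
    # would evaluate its learned score network here.
    return -0.5 * x * (1.0 + t)

def picard_parallel_solve(x0, ts, n_iters=50, tol=1e-6):
    """Solve x(t_k) = x0 + sum_{i<k} drift(x_i, t_i) * dt_i by Picard iteration.

    Every timestep is refined simultaneously in each sweep, so the drift
    evaluations per sweep can run in parallel (here: one vectorized call).
    """
    n = len(ts)
    dts = np.diff(ts)                       # step sizes dt_i
    xs = np.repeat(x0[None, :], n, axis=0)  # initial guess: constant trajectory
    for _ in range(n_iters):
        # Evaluate the drift at all points of the current trajectory at once.
        f = drift(xs[:-1], ts[:-1, None])
        increments = f * dts[:, None]
        new_xs = np.concatenate([x0[None, :],
                                 x0[None, :] + np.cumsum(increments, axis=0)])
        if np.max(np.abs(new_xs - xs)) < tol:  # fixed point reached
            return new_xs
        xs = new_xs
    return xs

# Usage: integrate a 2-D toy state along 64 timesteps in parallel.
x0 = np.array([1.0, -2.0])
ts = np.linspace(0.0, 1.0, 64)
traj = picard_parallel_solve(x0, ts)
print(traj[-1])  # final state approximates the sequential solution
```

The trade-off this illustrates: each Picard sweep costs as much compute as a full sequential pass, but the sweeps typically converge in far fewer iterations than there are timesteps, so wall-clock time drops when the per-step evaluations run in parallel.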