Parallel Sampling

Parallel sampling aims to accelerate the generation of samples from complex probability distributions, a crucial task in fields such as machine learning and scientific computing. Current research focuses on improving the efficiency of parallel sampling within diffusion and autoregressive models, employing techniques such as Picard iterations and contrastive training to mitigate issues like poor out-of-distribution performance. These advances can substantially speed up inference in generative models and Bayesian inference, yielding more efficient and scalable algorithms for diverse applications.
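The core idea behind Picard-iteration-based parallel sampling can be illustrated on a toy sequential update rule: instead of computing each step after the previous one finishes, one guesses the whole trajectory and refines all steps at once, with every refinement sweep guaranteed to be exact after at most `T` sweeps. The sketch below is a minimal illustration, not any particular paper's method; `drift`, `sequential`, and `picard` are hypothetical names, and the simple linear drift stands in for an expensive denoiser or score-network call whose evaluations become embarrassingly parallel.

```python
import numpy as np

def drift(x, t):
    # Toy per-step update; stands in for an expensive model evaluation.
    return -0.1 * x + 0.05 * np.sin(t)

def sequential(x0, T):
    # Baseline: T strictly sequential Euler-style steps.
    xs = [x0]
    for t in range(T):
        xs.append(xs[-1] + drift(xs[-1], t))
    return np.array(xs)

def picard(x0, T, iters):
    # Picard iteration: start from a constant guess for the whole
    # trajectory, then repeatedly recompute all T drifts at once.
    xs = np.full(T + 1, x0, dtype=float)
    for _ in range(iters):
        # The T drift evaluations depend only on the previous sweep's
        # states, so they are independent and can run in parallel.
        d = drift(xs[:-1], np.arange(T))
        # x_t = x_0 + sum of drifts up to step t (a prefix sum).
        xs = x0 + np.concatenate(([0.0], np.cumsum(d)))
    return xs
```

Each sweep fixes at least one additional prefix step exactly, so `picard(x0, T, T)` reproduces `sequential(x0, T)`; the practical speedup comes from the trajectory typically converging in far fewer than `T` sweeps, with each sweep's model calls batched in parallel.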

Papers