Score-Based Models
Score-based generative models learn the gradient of the log-density of the data distribution (the "score") and use it to generate new samples by reversing a diffusion process that gradually adds noise. Current research focuses on improving training efficiency by reducing gradient variance, enhancing robustness to noisy training data, and developing new formulations, such as those incorporating Wasserstein proximal operators or subspace projections, to improve sample quality and computational speed. These models are proving highly effective in diverse applications, including image generation, inverse problems (such as medical image reconstruction), and even quantum system modeling, demonstrating their broad significance across scientific disciplines.
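To make the mechanism concrete, here is a minimal sketch (not taken from any of the papers listed below) of the two ingredients mentioned above: training a score network with denoising score matching at several noise levels, then sampling by annealed Langevin dynamics, which is one way to "reverse" the noising process. It assumes PyTorch and a toy 1-D mixture-of-Gaussians dataset; the `ScoreNet` architecture, noise levels, and hyperparameters are illustrative choices.

```python
import torch
import torch.nn as nn

# Toy data: 1-D mixture of two Gaussians, shape (N, 1).
data = torch.cat([torch.randn(512) * 0.3 - 2.0,
                  torch.randn(512) * 0.3 + 2.0]).unsqueeze(1)

# Small score network s_theta(x, sigma) approximating grad_x log p_sigma(x).
class ScoreNet(nn.Module):
    def __init__(self, hidden=64):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(2, hidden), nn.SiLU(),
            nn.Linear(hidden, hidden), nn.SiLU(),
            nn.Linear(hidden, 1),
        )

    def forward(self, x, sigma):
        return self.net(torch.cat([x, sigma], dim=1))

sigmas = torch.tensor([1.0, 0.5, 0.25, 0.1])  # noise levels, coarse to fine
model = ScoreNet()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Denoising score matching: for x_noisy = x + sigma * eps, the regression
# target for the score of the perturbed distribution is -(x_noisy - x) / sigma^2.
for step in range(1000):
    idx = torch.randint(len(sigmas), (data.shape[0], 1))
    sigma = sigmas[idx]
    noise = torch.randn_like(data) * sigma
    x_noisy = data + noise
    target = -noise / sigma**2
    # sigma^2 weighting balances the loss across noise levels.
    loss = ((model(x_noisy, sigma) - target) ** 2 * sigma**2).mean()
    opt.zero_grad()
    loss.backward()
    opt.step()

# Sampling: annealed Langevin dynamics, moving from the coarsest noise level
# to the finest, each step following the learned score plus injected noise.
x = torch.randn(1000, 1) * sigmas[0]
with torch.no_grad():
    for sigma in sigmas:
        step_size = 0.1 * sigma**2
        for _ in range(50):
            z = torch.randn_like(x)
            x = x + step_size * model(x, sigma.expand(x.shape[0], 1)) \
                + (2 * step_size) ** 0.5 * z

print("sample mean/std:", x.mean().item(), x.std().item())
```

Training at multiple noise levels and annealing the sampler from large to small sigma is what lets the model cover low-density regions early and refine details later; the sigma^2 loss weighting is one common heuristic for keeping the objective comparable across levels.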
Papers
November 30, 2022
September 26, 2022
September 2, 2022