Diffusion-Based Generative Models

Diffusion-based generative models synthesize data by learning to reverse a gradual noise-addition process, with the goal of producing high-quality, diverse samples from a learned data distribution. Current research focuses on improving efficiency and scalability across data types such as images, audio, speech, point clouds, and graphs, typically building on U-Net or Transformer backbones and exploring formulations such as Schrödinger bridges and beta processes to improve performance. Their ability to generate realistic, complex data is driving advances in fields including image synthesis, speech enhancement, drug discovery, and scientific simulation.
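To make the "reversing a noise-addition process" idea concrete, below is a minimal NumPy sketch of the standard DDPM formulation: the closed-form forward noising step q(x_t | x_0) and one ancestral reverse step. The noise schedule values and the `predict_noise` function are illustrative assumptions; in practice the noise predictor is a trained U-Net or Transformer.

```python
import numpy as np

T = 1000                                    # number of diffusion steps (assumed)
betas = np.linspace(1e-4, 0.02, T)          # linear noise schedule (common default)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)             # cumulative products, i.e. alpha-bar_t

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form; also return the noise used."""
    eps = rng.standard_normal(x0.shape)
    x_t = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return x_t, eps

def reverse_step(x_t, t, predict_noise, rng):
    """One reverse (denoising) step x_t -> x_{t-1}, given a noise-prediction model.

    `predict_noise(x_t, t)` is a hypothetical stand-in for a trained network
    that estimates the noise added at step t.
    """
    eps_hat = predict_noise(x_t, t)
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_hat) / np.sqrt(alphas[t])
    if t == 0:
        return mean
    # Add stochasticity with variance beta_t (one common choice).
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x_t.shape)
```

Generation then amounts to starting from pure Gaussian noise x_T and applying `reverse_step` for t = T-1, ..., 0.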

Papers