Diffusion-Based Generative Models
Diffusion-based generative models synthesize data by learning to reverse a gradual noise-addition process, with the goal of producing high-quality, diverse samples from a learned data distribution. Current research focuses on improving efficiency and scalability across diverse data types, including images, audio, speech, point clouds, and graphs, often employing architectures such as U-Nets and Transformers and exploring techniques such as Schrödinger bridges and beta processes for improved performance. Their ability to generate realistic, complex data is driving advances in image synthesis, speech enhancement, drug discovery, and scientific simulation.
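The forward noise-addition process can be made concrete with a short sketch. The snippet below, which assumes the standard DDPM formulation with a linear beta schedule (the schedule parameters, `T`, and the toy data are illustrative choices, not from the source), shows how a clean sample is noised in closed form; in a full model, a trained denoising network would then reverse this process step by step.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear beta schedule and cumulative signal-retention products."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)  # alpha_bar_t = prod_{s<=t} (1 - beta_s)
    return betas, alpha_bars

def forward_diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0) directly:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

rng = np.random.default_rng(0)
betas, alpha_bars = make_schedule()
x0 = rng.standard_normal(16)                 # toy "data" sample
xT = forward_diffuse(x0, 999, alpha_bars, rng)

# By the final step the signal coefficient sqrt(alpha_bar_T) is tiny,
# so x_T is essentially pure Gaussian noise.
print(float(np.sqrt(alpha_bars[-1])))
```

Sampling then amounts to starting from pure noise and iterating the learned reverse (denoising) transition back to t = 0; the closed-form forward sampler above is what makes training on random timesteps efficient.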