Diffusion-Based Generative Models
Diffusion-based generative models synthesize data by reversing a gradual noise-addition process, with the goal of producing high-quality, diverse samples from a learned data distribution. Current research focuses on improving efficiency and scalability across data types such as images, audio, speech, point clouds, and graphs, typically using U-Net or Transformer backbones and exploring techniques such as Schrödinger bridges and beta processes for improved performance. These models are advancing fields including image synthesis, speech enhancement, drug discovery, and scientific simulation through their ability to generate realistic, complex data.
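The noise-addition process that these models learn to reverse can be sketched concretely. Below is a minimal, illustrative NumPy sketch of a DDPM-style forward process: a linear variance schedule and the closed-form sampling of a noised state x_t from clean data x_0. The schedule values, array shapes, and function names here are assumptions for illustration, not any specific paper's implementation.

```python
import numpy as np

def make_schedule(T=1000, beta_start=1e-4, beta_end=0.02):
    """Linear variance schedule beta_t and cumulative products alpha_bar_t.
    Values follow a common DDPM-style choice (assumed, not from the source)."""
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alpha_bars

def forward_diffuse(x0, t, alpha_bars, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * eps."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
betas, alpha_bars = make_schedule()
x0 = rng.standard_normal((4, 8))  # toy stand-in for real data
xt, eps = forward_diffuse(x0, t=999, alpha_bars=alpha_bars, rng=rng)
# By the final step alpha_bar_T is tiny, so x_T is close to pure Gaussian
# noise; the generative model is trained to invert this trajectory,
# typically by predicting eps from (x_t, t).
```

A reverse (generative) pass would start from Gaussian noise and iteratively denoise using a learned noise predictor; the closed-form forward sampler above is what makes training on random timesteps cheap.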