Diffusion-Based Generative Models
Diffusion-based generative models are a class of powerful AI models that create new data samples by reversing a diffusion process: starting from pure noise, they progressively denoise it until a realistic sample emerges. Current research focuses on improving model efficiency and controllability, exploring architectures such as transformers and UNets, and incorporating various conditioning mechanisms (e.g., text, depth maps, physical priors) to guide the generation process. These models are having a significant impact across diverse fields, enabling advances in image and video editing, speech enhancement, medical imaging, and even extreme video compression, thanks to their ability to generate high-quality, diverse, and often physically plausible data.
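To make the reverse-diffusion idea concrete, below is a minimal sketch of DDPM-style ancestral sampling (in the spirit of Ho et al., 2020). It is not taken from any specific paper listed here: the tiny `eps_model` MLP and the `predict_noise` helper are hypothetical stand-ins for a trained noise-prediction network, which in practice would be a UNet or transformer, possibly conditioned on text or depth maps.

```python
# Minimal sketch of DDPM-style reverse-diffusion (ancestral) sampling.
# "eps_model" is a hypothetical placeholder for a trained noise predictor.
import torch

T = 1000                                    # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)       # linear noise schedule beta_t
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)   # cumulative products \bar{alpha}_t

# Hypothetical noise predictor eps_theta(x_t, t) for 2-D toy data;
# a trained UNet/transformer would replace this in a real model.
eps_model = torch.nn.Sequential(
    torch.nn.Linear(3, 64), torch.nn.SiLU(), torch.nn.Linear(64, 2)
)

def predict_noise(x_t, t):
    """Append the normalized timestep to the sample and predict the added noise."""
    t_feat = torch.full((x_t.shape[0], 1), t / T)
    return eps_model(torch.cat([x_t, t_feat], dim=1))

@torch.no_grad()
def sample(n_samples=16, dim=2):
    """Start from pure Gaussian noise and denoise step by step (t = T-1 ... 0)."""
    x = torch.randn(n_samples, dim)
    for t in reversed(range(T)):
        eps = predict_noise(x, t)
        # Posterior mean of x_{t-1} given x_t and the predicted noise.
        mean = (x - betas[t] / torch.sqrt(1.0 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        if t > 0:
            x = mean + torch.sqrt(betas[t]) * torch.randn_like(x)  # inject sampling noise
        else:
            x = mean                                               # last step is deterministic
    return x

print(sample().shape)  # torch.Size([16, 2])
```

Conditioning mechanisms such as text prompts or depth maps would enter as extra inputs to the noise predictor (e.g., via classifier-free guidance); the sampling loop itself stays essentially unchanged.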