Diffusion-Based Generative Models
Diffusion-based generative models are a class of powerful AI models that create new data samples by reversing a diffusion process: starting from pure noise, they gradually denoise it until a realistic sample emerges. Current research focuses on improving efficiency and controllability, exploring architectures such as transformers and UNets, and incorporating conditioning mechanisms (e.g., text, depth maps, physical priors) to guide generation. These models are making an impact across diverse fields, enabling advances in image and video editing, speech enhancement, medical imaging, and even extreme video compression, thanks to their ability to generate high-quality, diverse, and often physically plausible data.
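The forward (noising) and reverse (denoising) processes described above can be sketched in a few lines. The following is a minimal, illustrative DDPM-style example, not any particular paper's implementation: the linear beta schedule, variable names, and toy data are assumptions, and the "noise predictor" is a stand-in for what would be a trained neural network (e.g., a UNet or transformer).

```python
import numpy as np

# Assumed linear noise schedule over T timesteps (DDPM-style sketch).
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # per-step noise variances
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative product, \bar{alpha}_t

def forward_noise(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form by mixing data and noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(xt, t, eps_pred, rng):
    """One ancestral sampling step x_t -> x_{t-1}, given predicted noise.

    In a real model, eps_pred comes from a trained network; here it is
    supplied directly for illustration.
    """
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * eps_pred) / np.sqrt(alphas[t])
    if t > 0:
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

rng = np.random.default_rng(0)
x0 = np.ones(4)                          # toy "data" sample
xt, eps = forward_noise(x0, T - 1, rng)  # heavily noised at the last step
x_prev = reverse_step(xt, T - 1, eps, rng)
```

Generation iterates `reverse_step` from t = T-1 down to 0, starting from pure Gaussian noise; with the true noise plugged in, the closed-form forward step is exactly invertible, which is what training teaches the network to approximate.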