Diffusion-Based Models
Diffusion-based models are a class of generative models that create new data samples by reversing a noise-addition process. Current research focuses on improving efficiency, particularly reducing the number of network evaluations needed for sample generation, and enhancing control over the generation process through techniques like conditional generation and incorporating prior knowledge (e.g., using Gaussian processes or state-space models). These models are proving valuable across diverse fields, including image synthesis, time series forecasting, and medical image generation, offering improvements in both the quality and speed of data generation compared to previous methods. Their impact stems from the ability to generate high-fidelity synthetic data for various applications where real data is scarce or expensive to acquire.
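The noise-addition and reversal described above can be sketched in a few lines. The snippet below is a minimal illustration of a DDPM-style process, assuming a linear beta schedule; the `predicted_eps` argument stands in for the output of a trained noise-prediction network, which is not included here.

```python
import numpy as np

T = 1000                                  # number of diffusion steps
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule (an assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)           # cumulative product alpha_bar_t

def forward_diffuse(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) from the forward (noise-addition) process."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def reverse_step(xt, t, predicted_eps, rng):
    """One ancestral sampling step of the reverse (denoising) process.
    In practice `predicted_eps` comes from a trained network."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * predicted_eps) / np.sqrt(alphas[t])
    if t > 0:                             # no noise is added at the final step
        mean += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

rng = np.random.default_rng(0)
x0 = rng.standard_normal(8)               # toy "data" sample
xt, eps = forward_diffuse(x0, t=500, rng=rng)
# Using the true noise as a stand-in for the network's prediction:
x_prev = reverse_step(xt, t=500, predicted_eps=eps, rng=rng)
```

Generating a sample means iterating `reverse_step` from `t = T-1` down to `t = 0` starting from pure Gaussian noise; the efficiency research mentioned above aims to cut the number of such network evaluations.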
Papers
Memory Triggers: Unveiling Memorization in Text-To-Image Generative Models through Word-Level Duplication
Ali Naseh, Jaechul Roh, Amir Houmansadr
WarpDiffusion: Efficient Diffusion Model for High-Fidelity Virtual Try-on
Xujie Zhang, Xiu Li, Michael Kampffmeyer, Xin Dong, Zhenyu Xie, Feida Zhu, Haoye Dong, Xiaodan Liang