Diffusion-Based Models
Diffusion-based models are a class of generative models that create new data samples by reversing a gradual noise-addition process. Current research focuses on improving efficiency, particularly by reducing the number of network evaluations needed per sample, and on enhancing control over generation through techniques such as conditional generation and the incorporation of prior knowledge (e.g., Gaussian processes or state-space models). These models are proving valuable across diverse fields, including image synthesis, time series forecasting, and medical image generation, offering improvements in both the quality and speed of data generation over previous methods. Their impact stems from their ability to generate high-fidelity synthetic data for applications where real data is scarce or expensive to acquire.
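To make the "reversing a noise-addition process" idea concrete, here is a minimal sketch of the forward (noising) side of a diffusion model, assuming a simple linear beta schedule; the step count, schedule values, and function names are illustrative, not taken from any specific paper above.

```python
import numpy as np

T = 1000                             # number of diffusion steps (illustrative)
betas = np.linspace(1e-4, 0.02, T)   # linear noise schedule (an assumption)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative product: alpha_bar_t

def q_sample(x0, t, rng):
    """Sample x_t ~ q(x_t | x_0) in closed form:
    x_t = sqrt(alpha_bar_t) * x_0 + sqrt(1 - alpha_bar_t) * noise."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

rng = np.random.default_rng(0)
x0 = rng.standard_normal((8, 8))     # stand-in for a data sample (e.g., an image patch)
xT, _ = q_sample(x0, T - 1, rng)
# After many steps, x_t is close to pure Gaussian noise; generation runs this
# process in reverse, denoising from x_T back toward a data sample. Reducing
# the number of such reverse steps (network evaluations) is a main efficiency
# focus of current research.
```

Because `alpha_bars[T-1]` is tiny, almost none of the original signal survives at the final step, which is what lets sampling start from pure noise.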
Papers
Mamba-ST: State Space Model for Efficient Style Transfer
Filippo Botti, Alex Ergasti, Leonardo Rossi, Tomaso Fontanini, Claudio Ferrari, Massimo Bertozzi, Andrea Prati
Cross-modality image synthesis from TOF-MRA to CTA using diffusion-based models
Alexander Koch, Orhun Utku Aydin, Adam Hilbert, Jana Rieger, Satoru Tanioka, Fujimaro Ishida, Dietmar Frey