Conditional Denoising Diffusion
Conditional denoising diffusion models (CDDMs) are generative models that learn to reconstruct data by reversing a gradual noise-addition process, conditioned on additional information. Current research applies CDDMs to diverse tasks, including image and video generation, anomaly detection, data reconstruction in various domains (e.g., medical imaging, remote sensing, audio), and even sequential recommendation, often using U-Net-like architectures as the denoising backbone. Because CDDMs can generate high-quality, realistic data from noisy or incomplete inputs, they have become a powerful tool across scientific fields and practical applications, offering improvements over traditional methods in areas such as image enhancement, signal processing, and data imputation. A minimal code sketch of the underlying mechanism follows.
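The sketch below illustrates the two parts described above: a fixed forward process that gradually adds Gaussian noise, and a learned reverse process whose noise predictor is conditioned on auxiliary information y. It is a toy PyTorch example under stated assumptions (a small MLP noise predictor, a linear noise schedule, illustrative hyperparameters), not the method of any listed paper; real CDDMs typically condition a U-Net over images instead.

```python
import torch
import torch.nn as nn

# Illustrative sketch of a conditional denoising diffusion model (DDPM-style).
# The noise predictor is conditioned on auxiliary information y (e.g., a noisy
# observation or a class label). All names and hyperparameters are assumptions.

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative \bar{alpha}_t


class CondEpsModel(nn.Module):
    """Toy noise predictor; a real CDDM would use a U-Net over images."""

    def __init__(self, dim: int, cond_dim: int):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + cond_dim + 1, 256), nn.SiLU(),
            nn.Linear(256, 256), nn.SiLU(),
            nn.Linear(256, dim),
        )

    def forward(self, x_t, y, t):
        t_emb = (t.float() / T).unsqueeze(-1)   # crude timestep embedding
        return self.net(torch.cat([x_t, y, t_emb], dim=-1))


def training_loss(model, x0, y):
    """Sample a timestep, noise x0 via the forward process, predict the noise."""
    b = x0.shape[0]
    t = torch.randint(0, T, (b,))
    noise = torch.randn_like(x0)
    ab = alpha_bars[t].unsqueeze(-1)
    x_t = ab.sqrt() * x0 + (1 - ab).sqrt() * noise   # q(x_t | x_0)
    return ((model(x_t, y, t) - noise) ** 2).mean()


@torch.no_grad()
def sample(model, y, dim: int):
    """Reverse process: start from noise and denoise step by step, conditioned on y."""
    x = torch.randn(y.shape[0], dim)
    for i in reversed(range(T)):
        t = torch.full((y.shape[0],), i, dtype=torch.long)
        eps = model(x, y, t)
        a, ab, beta = alphas[i], alpha_bars[i], betas[i]
        mean = (x - beta / (1 - ab).sqrt() * eps) / a.sqrt()
        x = mean + beta.sqrt() * torch.randn_like(x) if i > 0 else mean
    return x
```

At inference time, the conditioning signal y steers each reverse step, which is what lets these models turn noisy or incomplete observations into clean reconstructions rather than arbitrary samples.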
Papers
Radio-astronomical Image Reconstruction with Conditional Denoising Diffusion Model
Mariia Drozdova, Vitaliy Kinakh, Omkar Bait, Olga Taran, Erica Lastufka, Miroslava Dessauges-Zavadsky, Taras Holotyak, Daniel Schaerer, Slava Voloshynovskiy
Zero-Shot Unsupervised and Text-Based Audio Editing Using DDPM Inversion
Hila Manor, Tomer Michaeli