Diffusion Explainer
Diffusion models are generative models that create new data samples, primarily images and other high-dimensional data, by learning to reverse a gradual noise-addition process. Current research focuses on improving sampling efficiency (e.g., one-step diffusion), enhancing controllability (e.g., through classifier-free guidance and conditioning on modalities such as text and 3D priors), and addressing challenges such as data replication and mode collapse. These advances are impacting diverse fields, from image super-resolution and medical imaging to robotics, recommendation systems, and scientific simulation, by providing powerful tools for data generation, manipulation, and analysis.
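To make the two mechanisms mentioned above concrete, here is a minimal sketch, not taken from any of the listed papers, of the forward noise-addition process that a diffusion model learns to reverse, a single DDPM-style reverse step, and classifier-free guidance applied to a noise prediction. The epsilon predictors are stand-in arrays; in practice they would come from a trained neural network conditioned on the noisy sample, the timestep, and (optionally) text or other modalities.

import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # linear noise schedule beta_t
alphas_bar = np.cumprod(1.0 - betas)      # cumulative product \bar{alpha}_t

def forward_noise(x0, t, rng):
    """q(x_t | x_0): corrupt a clean sample with Gaussian noise at step t."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alphas_bar[t]) * x0 + np.sqrt(1.0 - alphas_bar[t]) * eps
    return xt, eps

def guided_eps(eps_cond, eps_uncond, w):
    """Classifier-free guidance: push the unconditional noise prediction
    toward the conditional one by guidance weight w."""
    return eps_uncond + w * (eps_cond - eps_uncond)

def ddpm_reverse_step(xt, t, eps_hat, rng):
    """One ancestral sampling step for p(x_{t-1} | x_t) given predicted noise."""
    alpha_t = 1.0 - betas[t]
    mean = (xt - betas[t] / np.sqrt(1.0 - alphas_bar[t]) * eps_hat) / np.sqrt(alpha_t)
    if t > 0:  # add noise with sigma_t = sqrt(beta_t) except at the final step
        mean = mean + np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    x0 = rng.standard_normal((4, 4))                   # toy "image"
    xt, eps = forward_noise(x0, t=500, rng=rng)        # forward: add noise
    # Stand-in predictions; a real model would output these from (xt, t, condition).
    eps_cond = eps
    eps_uncond = eps + 0.1 * rng.standard_normal(eps.shape)
    eps_hat = guided_eps(eps_cond, eps_uncond, w=3.0)  # classifier-free guidance
    x_prev = ddpm_reverse_step(xt, t=500, eps_hat=eps_hat, rng=rng)

The guidance weight w trades diversity for adherence to the conditioning signal: w = 0 recovers the unconditional prediction, while larger values follow the condition more strictly.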
Papers
DiffAugment: Diffusion based Long-Tailed Visual Relationship Recognition
Parul Gupta, Tuan Nguyen, Abhinav Dhall, Munawar Hayat, Trung Le, Thanh-Toan Do
GD^2-NeRF: Generative Detail Compensation via GAN and Diffusion for One-shot Generalizable Neural Radiance Fields
Xiao Pan, Zongxin Yang, Shuai Bai, Yi Yang
Unified framework for diffusion generative models in SO(3): applications in computer vision and astrophysics
Yesukhei Jagvaral, Francois Lanusse, Rachel Mandelbaum
Bayesian ECG reconstruction using denoising diffusion generative models
Gabriel V. Cardoso, Lisa Bedin, Josselin Duchateau, Rémi Dubois, Eric Moulines