Diffusion Explainer
Diffusion models are generative models that leverage the principles of diffusion processes to create new data samples, primarily images and other high-dimensional data, by reversing a gradual noise-addition process. Current research focuses on improving sampling efficiency (e.g., one-step diffusion via distillation), enhancing controllability (e.g., through classifier-free guidance and conditioning on modalities such as text and 3D priors), and addressing challenges like training-data replication and mode collapse. These advances are impacting diverse fields, from image super-resolution and medical imaging to robotics, recommendation systems, and scientific simulations, by providing powerful tools for data generation, manipulation, and analysis.
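The noise-addition and reversal described above can be sketched in a few lines. This is a minimal, illustrative DDPM-style example, not any specific paper's implementation: `p_sample_step` assumes a trained noise-prediction network (stood in for here by the true noise), and `cfg` shows the standard classifier-free guidance combination of conditional and unconditional noise predictions with guidance weight `w`.

```python
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)   # linear variance schedule (illustrative)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)      # cumulative product, \bar{alpha}_t

def q_sample(x0, t, rng):
    """Forward process: noise x0 to step t in closed form."""
    eps = rng.standard_normal(x0.shape)
    xt = np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps
    return xt, eps

def p_sample_step(xt, t, predicted_eps, rng):
    """One reverse (denoising) step: estimate x_{t-1} from x_t.

    `predicted_eps` would come from a trained network; here the caller
    supplies it directly for illustration.
    """
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (xt - coef * predicted_eps) / np.sqrt(alphas[t])
    if t > 0:  # no noise is added at the final step
        mean += np.sqrt(betas[t]) * rng.standard_normal(xt.shape)
    return mean

def cfg(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate toward the conditional prediction."""
    return eps_uncond + w * (eps_cond - eps_uncond)

rng = np.random.default_rng(0)
x0 = rng.standard_normal((4,))
xt, eps = q_sample(x0, t=500, rng=rng)
x_prev = p_sample_step(xt, t=500, predicted_eps=eps, rng=rng)
```

Running the full reverse chain from `t = T-1` down to `t = 0`, starting from pure Gaussian noise, is what turns this per-step update into a sampler; the one-step distillation work listed below aims to collapse that long chain into a single network evaluation.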
Papers
One-Step Diffusion Policy: Fast Visuomotor Policies via Diffusion Distillation
Zhendong Wang, Zhaoshuo Li, Ajay Mandlekar, Zhenjia Xu, Jiaojiao Fan, Yashraj Narang, Linxi Fan, Yuke Zhu, Yogesh Balaji, Mingyuan Zhou, Ming-Yu Liu, Yu Zeng
EEG-Driven 3D Object Reconstruction with Color Consistency and Diffusion Prior
Xin Xiang, Wenhui Zhou, Guojun Dai
Generative Simulations of The Solar Corona Evolution With Denoising Diffusion: Proof of Concept
Grégoire Francisco, Francesco Pio Ramunno, Manolis K. Georgoulis, João Fernandes, Teresa Barata, Dario Del Moro
Error estimates between SGD with momentum and underdamped Langevin diffusion
Arnaud Guillin (LMBP), Yu Wang, Lihu Xu, Haoran Yang
DiffusionSeeder: Seeding Motion Optimization with Diffusion for Rapid Motion Planning
Huang Huang, Balakumar Sundaralingam, Arsalan Mousavian, Adithyavairavan Murali, Ken Goldberg, Dieter Fox