Diffusion Model
Diffusion models are generative models that produce samples by learning to reverse a gradual noising process, enabling high-quality generation from complex data distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling new applications in image generation, inverse-problem solving, and multi-modal data synthesis.
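To make the reverse-process and guidance ideas above concrete, here is a minimal sketch of DDPM-style sampling with classifier-free guidance. The tiny MLP denoiser, the linear beta schedule, and the data/conditioning dimensions are illustrative assumptions, not the method of any paper listed below.

```python
# Minimal sketch: reversing the noising process with classifier-free guidance.
# ToyDenoiser and the linear beta schedule are illustrative placeholders.
import torch
import torch.nn as nn

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule (assumption)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

class ToyDenoiser(nn.Module):
    """Hypothetical noise-prediction network eps_theta(x_t, t, c)."""
    def __init__(self, dim=2, cond_dim=4):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(dim + 1 + cond_dim, 128), nn.SiLU(),
            nn.Linear(128, dim),
        )

    def forward(self, x_t, t, cond):
        t_emb = t.float().unsqueeze(-1) / T          # crude timestep embedding
        return self.net(torch.cat([x_t, t_emb, cond], dim=-1))

@torch.no_grad()
def cfg_sample(model, cond, guidance_scale=3.0, dim=2):
    """Run the reverse diffusion chain with classifier-free guidance."""
    x = torch.randn(cond.shape[0], dim)              # start from pure noise
    null_cond = torch.zeros_like(cond)               # "unconditional" input
    for t in reversed(range(T)):
        t_batch = torch.full((x.shape[0],), t)
        eps_c = model(x, t_batch, cond)              # conditional prediction
        eps_u = model(x, t_batch, null_cond)         # unconditional prediction
        eps = eps_u + guidance_scale * (eps_c - eps_u)   # guided noise estimate
        # Standard DDPM posterior mean, using the guided epsilon.
        coef = betas[t] / torch.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / torch.sqrt(alphas[t])
        noise = torch.randn_like(x) if t > 0 else torch.zeros_like(x)
        x = mean + torch.sqrt(betas[t]) * noise
    return x

# Usage: sample 8 points conditioned on random 4-dim conditioning vectors.
model = ToyDenoiser()
samples = cfg_sample(model, cond=torch.randn(8, 4))
```

Raising guidance_scale pushes samples toward the conditioning signal at the cost of diversity; scale 0 recovers unconditional sampling.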
Papers
Diff-PIC: Revolutionizing Particle-In-Cell Nuclear Fusion Simulation with Diffusion Models
Chuan Liu, Chunshu Wu, Shihui Cao, Mingkai Chen, James Chenhao Liang, Ang Li, Michael Huang, Chuang Ren, Dongfang Liu, Ying Nian Wu, Tong Geng
SkyDiffusion: Ground-to-Aerial Image Synthesis with Diffusion Models and BEV Paradigm
Junyan Ye, Jun He, Weijia Li, Zhutao Lv, Yi Lin, Jinhua Yu, Haote Yang, Conghui He
Optimizing Diffusion Models for Joint Trajectory Prediction and Controllable Generation
Yixiao Wang, Chen Tang, Lingfeng Sun, Simone Rossi, Yichen Xie, Chensheng Peng, Thomas Hannagan, Stefano Sabatini, Nicola Poerio, Masayoshi Tomizuka, Wei Zhan
Smoothed Energy Guidance: Guiding Diffusion Models with Reduced Energy Curvature of Attention
Susung Hong
TurboEdit: Text-Based Image Editing Using Few-Step Diffusion Models
Gilad Deutch, Rinon Gal, Daniel Garibi, Or Patashnik, Daniel Cohen-Or
A Simple Background Augmentation Method for Object Detection with Diffusion Model
Yuhang Li, Xin Dong, Chen Chen, Weiming Zhuang, Lingjuan Lyu
Hierarchical Conditioning of Diffusion Models Using Tree-of-Life for Studying Species Evolution
Mridul Khurana, Arka Daw, M. Maruf, Josef C. Uyeda, Wasila Dahdul, Caleb Charpentier, Yasin Bakış, Henry L. Bart, Paula M. Mabee, Hilmar Lapp, James P. Balhoff, Wei-Lun Chao, Charles Stewart, Tanya Berger-Wolf, Anuj Karpatne
Generative Learning of the Solution of Parametric Partial Differential Equations Using Guided Diffusion Models and Virtual Observations
Han Gao, Sebastian Kaltenbach, Petros Koumoutsakos
Detecting, Explaining, and Mitigating Memorization in Diffusion Models
Yuxin Wen, Yuchen Liu, Chen Chen, Lingjuan Lyu
Deformable 3D Shape Diffusion Model
Dengsheng Chen, Jie Hu, Xiaoming Wei, Enhua Wu
Diff-Cleanse: Identifying and Mitigating Backdoor Attacks in Diffusion Models
Jiang Hao, Xiao Jin, Hu Xiaoguang, Chen Tianyou
State-observation augmented diffusion model for nonlinear assimilation
Zhuoyuan Li, Bin Dong, Pingwen Zhang
Learning Feature-Preserving Portrait Editing from Generated Pairs
Bowei Chen, Tiancheng Zhi, Peihao Zhu, Shen Sang, Jing Liu, Linjie Luo
Specify and Edit: Overcoming Ambiguity in Text-Based Image Editing
Ekaterina Iakovleva, Fabio Pizzati, Philip Torr, Stéphane Lathuilière
LatentArtiFusion: An Effective and Efficient Histological Artifacts Restoration Framework
Zhenqi He, Wenrui Liu, Minghao Yin, Kai Han
FedDEO: Description-Enhanced One-Shot Federated Learning with Diffusion Models
Mingzhao Yang, Shangchao Su, Bin Li, Xiangyang Xue