Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
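One controllability technique mentioned above, classifier-free guidance, steers sampling by combining two noise predictions from the same denoiser, one with and one without the conditioning signal. A minimal sketch of that combination step (the function name and toy arrays are illustrative, not from any of the papers listed below):

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, guidance_scale):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one by guidance_scale (w).
    w = 1 recovers the purely conditional prediction; w > 1 strengthens
    adherence to the condition at some cost in sample diversity."""
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy stand-ins for the two denoiser outputs at one sampling step;
# in practice these come from a trained network evaluated twice.
eps_u = np.array([0.1, -0.2])
eps_c = np.array([0.3, 0.0])
print(cfg_noise(eps_u, eps_c, 7.5))  # guided noise estimate used by the sampler
```

The guided estimate then replaces the raw network output inside whatever sampler is used to reverse the diffusion process.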
Papers
InstructG2I: Synthesizing Images from Multimodal Attributed Graphs
Bowen Jin, Ziqi Pang, Bingjun Guo, Yu-Xiong Wang, Jiaxuan You, Jiawei Han
Diffusion Density Estimators
Akhil Premkumar
Jointly Generating Multi-view Consistent PBR Textures using Collaborative Control
Shimon Vainer, Konstantin Kutsy, Dante De Nigris, Ciara Rowles, Slava Elizarov, Simon Donné
Diff-FMT: Diffusion Models for Fluorescence Molecular Tomography
Qianqian Xue, Peng Zhang, Xingyu Liu, Wenjian Wang, Guanglei Zhang
Suppress Content Shift: Better Diffusion Features via Off-the-Shelf Generation Techniques
Benyuan Meng, Qianqian Xu, Zitai Wang, Zhiyong Yang, Xiaochun Cao, Qingming Huang
Decouple-Then-Merge: Towards Better Training for Diffusion Models
Qianli Ma, Xuefei Ning, Dongrui Liu, Li Niu, Linfeng Zhang
BELM: Bidirectional Explicit Linear Multi-step Sampler for Exact Inversion in Diffusion Models
Fangyikang Wang, Hubery Yin, Yuejiang Dong, Huminhao Zhu, Chao Zhang, Hanbin Zhao, Hui Qian, Chen Li
G2D2: Gradient-guided Discrete Diffusion for image inverse problem solving
Naoki Murata, Chieh-Hsin Lai, Yuhta Takida, Toshimitsu Uesaka, Bac Nguyen, Stefano Ermon, Yuki Mitsufuji
Chemistry-Inspired Diffusion with Non-Differentiable Guidance
Yuchen Shen, Chenhao Zhang, Sijie Fu, Chenghui Zhou, Newell Washburn, Barnabás Póczos
Toward Scalable Image Feature Compression: A Content-Adaptive and Diffusion-Based Approach
Sha Guo, Zhuo Chen, Yang Zhao, Ning Zhang, Xiaotong Li, Lingyu Duan
Sparse Repellency for Shielded Generation in Text-to-image Diffusion Models
Michael Kirchhof, James Thornton, Pierre Ablin, Louis Béthune, Eugene Ndiaye, Marco Cuturi
TIMBA: Time series Imputation with Bi-directional Mamba Blocks and Diffusion models
Javier Solís-García, Belén Vega-Márquez, Juan A. Nepomuceno, Isabel A. Nepomuceno-Chamorro
Training-free Diffusion Model Alignment with Sampling Demons
Po-Hung Yeh, Kuang-Huei Lee, Jun-Cheng Chen
TweedieMix: Improving Multi-Concept Fusion for Diffusion-based Image/Video Generation
Gihyun Kwon, Jong Chul Ye
Continuous Ensemble Weather Forecasting with Diffusion models
Martin Andrae, Tomas Landelius, Joel Oskarsson, Fredrik Lindsten
SePPO: Semi-Policy Preference Optimization for Diffusion Alignment
Daoan Zhang, Guangchen Lan, Dong-Jun Han, Wenlin Yao, Xiaoman Pan, Hongming Zhang, Mingxiao Li, Pengcheng Chen, Yu Dong, Christopher Brinton, Jiebo Luo
DiffuseReg: Denoising Diffusion Model for Obtaining Deformation Fields in Unsupervised Deformable Image Registration
Yongtai Zhuo, Yiqing Shen
Leveraging Multimodal Diffusion Models to Accelerate Imaging with Side Information
Timofey Efimov, Harry Dong, Megna Shah, Jeff Simmons, Sean Donegan, Yuejie Chi
Human-Feedback Efficient Reinforcement Learning for Online Diffusion Model Finetuning
Ayano Hiranaka, Shang-Fu Chen, Chieh-Hsin Lai, Dongjun Kim, Naoki Murata, Takashi Shibuya, Wei-Hsiang Liao, Shao-Hua Sun, Yuki Mitsufuji
Low-Rank Continual Personalization of Diffusion Models
Łukasz Staniszewski, Katarzyna Zaleska, Kamil Deja