Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, enabling high-quality sampling from complex distributions. Current research focuses on improving efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse-problem solving, and multi-modal data synthesis.
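To make the reverse-noising and guidance ideas above concrete, here is a minimal sketch of one DDPM-style denoising step with classifier-free guidance. All names (`eps_model`, `alphas`, `alpha_bars`, `guidance_scale`) are placeholders, and the variance choice is one common convention; this is an illustrative sketch, not the method of any paper listed below.

```python
import torch

def cfg_denoise_step(eps_model, x_t, t, cond, guidance_scale, alphas, alpha_bars):
    """One reverse-diffusion step x_t -> x_{t-1} with classifier-free guidance."""
    # Predict noise with and without conditioning, then extrapolate:
    # eps = eps_uncond + s * (eps_cond - eps_uncond)
    eps_cond = eps_model(x_t, t, cond)
    eps_uncond = eps_model(x_t, t, None)  # None = unconditional (dropped condition)
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    alpha_t, alpha_bar_t = alphas[t], alpha_bars[t]
    # DDPM posterior mean: subtract the predicted noise, rescale toward x_{t-1}
    mean = (x_t - (1 - alpha_t) / torch.sqrt(1 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean  # no noise is added at the final step
    sigma_t = torch.sqrt(1 - alpha_t)  # sigma_t = sqrt(beta_t), a common choice
    return mean + sigma_t * torch.randn_like(x_t)
```

Iterating this step from pure Gaussian noise at t = T down to t = 0 yields a sample; the guidance scale trades off sample diversity against fidelity to the condition.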
Papers
Disentangled Diffusion Autoencoder for Harmonization of Multi-site Neuroimaging Data
Ayodeji Ijishakin, Ana Lawry Aguila, Elizabeth Levitis, Ahmed Abdulaal, Andre Altmann, James Cole
GenDDS: Generating Diverse Driving Video Scenarios with Prompt-to-Video Generative Model
Yongjie Fu, Yunlong Li, Xuan Di
Grand canonical generative diffusion model for crystalline phases and grain boundaries
Bo Lei, Enze Chen, Hyuna Kwon, Tim Hsu, Babak Sadigh, Vincenzo Lordi, Timofey Frolov, Fei Zhou
MODULI: Unlocking Preference Generalization via Diffusion Models for Offline Multi-Objective Reinforcement Learning
Yifu Yuan, Zhenrui Zheng, Zibin Dong, Jianye Hao
Multi-Feature Aggregation in Diffusion Models for Enhanced Face Super-Resolution
Marcelo dos Santos, Rayson Laroca, Rafael O. Ribeiro, João C. Neves, David Menotti
GenRec: Unifying Video Generation and Recognition with Diffusion Models
Zejia Weng, Xitong Yang, Zhen Xing, Zuxuan Wu, Yu-Gang Jiang
Generative Inbetweening: Adapting Image-to-Video Models for Keyframe Interpolation
Xiaojuan Wang, Boyang Zhou, Brian Curless, Ira Kemelmacher-Shlizerman, Aleksander Holynski, Steven M. Seitz
DIFR3CT: Latent Diffusion for Probabilistic 3D CT Reconstruction from Few Planar X-Rays
Yiran Sun, Hana Baroudi, Tucker Netherton, Laurence Court, Osama Mawlawi, Ashok Veeraraghavan, Guha Balakrishnan
Constrained Diffusion Models via Dual Training
Shervin Khalafi, Dongsheng Ding, Alejandro Ribeiro
Diffusion Models Are Real-Time Game Engines
Dani Valevski, Yaniv Leviathan, Moab Arar, Shlomi Fruchter
TC-PDM: Temporally Consistent Patch Diffusion Models for Infrared-to-Visible Video Translation
Anh-Dzung Doan, Vu Minh Hieu Phan, Surabhi Gupta, Markus Wagner, Tat-Jun Chin, Ian Reid
FCDM: Sparse-view Sinogram Inpainting with Frequency Domain Convolution Enhanced Diffusion Models
Jiaze E, Srutarshi Banerjee, Tekin Bicer, Guannan Wang, Bin Ren
Foodfusion: A Novel Approach for Food Image Composition via Diffusion Models
Chaohua Shi, Xuan Wang, Si Shi, Xule Wang, Mingrui Zhu, Nannan Wang, Xinbo Gao
Rethinking Video Deblurring with Wavelet-Aware Dynamic Transformer and Diffusion Model
Chen Rao, Guangyuan Li, Zehua Lan, Jiakai Sun, Junsheng Luan, Wei Xing, Lei Zhao, Huaizhong Lin, Jianfeng Dong, Dalong Zhang
Training-free Long Video Generation with Chain of Diffusion Model Experts
Wenhao Li, Yichao Cao, Xiu Su, Xi Lin, Shan You, Mingkai Zheng, Yi Chen, Chang Xu