Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, enabling high-quality sampling from complex distributions. Current research focuses on improving efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling new applications in image generation, inverse-problem solving, and multi-modal data synthesis.
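As a brief illustration of the reverse-process sampling and classifier-free guidance mentioned above, here is a minimal sketch of standard DDPM-style ancestral sampling in PyTorch. The ToyDenoiser, its dimensions, and the conditioning vectors are hypothetical placeholders standing in for a trained noise-prediction network; they are not taken from any of the papers listed below.

```python
# Minimal sketch: DDPM-style reverse sampling with classifier-free guidance.
# The tiny MLP "denoiser" is a hypothetical stand-in for a trained
# noise-prediction network; the schedule is the standard linear-beta setup.
import torch

T = 1000                                   # number of diffusion steps
betas = torch.linspace(1e-4, 0.02, T)      # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)  # cumulative products (alpha-bar_t)

class ToyDenoiser(torch.nn.Module):
    """Hypothetical epsilon-predictor: maps (x_t, t, cond) -> predicted noise."""
    def __init__(self, dim=2, cond_dim=4):
        super().__init__()
        self.net = torch.nn.Sequential(
            torch.nn.Linear(dim + 1 + cond_dim, 64),
            torch.nn.SiLU(),
            torch.nn.Linear(64, dim),
        )

    def forward(self, x, t, cond):
        t_emb = t.float().unsqueeze(-1) / T   # crude timestep embedding
        return self.net(torch.cat([x, t_emb, cond], dim=-1))

@torch.no_grad()
def sample(model, cond, guidance_scale=3.0, dim=2, n=16):
    x = torch.randn(n, dim)                   # start from pure Gaussian noise
    null_cond = torch.zeros_like(cond)        # "unconditional" conditioning
    for t in reversed(range(T)):
        t_batch = torch.full((n,), t)
        # Classifier-free guidance: blend conditional and unconditional predictions.
        eps_cond = model(x, t_batch, cond)
        eps_uncond = model(x, t_batch, null_cond)
        eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)
        # Standard DDPM posterior mean; add noise at every step except the last.
        mean = (x - betas[t] / torch.sqrt(1 - alpha_bars[t]) * eps) / torch.sqrt(alphas[t])
        x = mean + (torch.sqrt(betas[t]) * torch.randn_like(x) if t > 0 else 0.0)
    return x

model = ToyDenoiser()
cond = torch.randn(16, 4)        # placeholder conditioning vectors
samples = sample(model, cond)
print(samples.shape)             # torch.Size([16, 2])
```

Raising guidance_scale trades sample diversity for stronger adherence to the conditioning signal, which is the usual behavior of classifier-free guidance.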
Papers
Mojito: Motion Trajectory and Intensity Control for Video Generation
Xuehai He, Shuohang Wang, Jianwei Yang, Xiaoxia Wu, Yiping Wang, Kuan Wang, Zheng Zhan, Olatunji Ruwase, Yelong Shen, Xin Eric Wang
Complex-Cycle-Consistent Diffusion Model for Monaural Speech Enhancement
Yi Li, Yang Sun, Plamen Angelov
Generative Modeling with Explicit Memory
Yi Tang, Peng Sun, Zhenglin Cheng, Tao Lin
DMin: Scalable Training Data Influence Estimation for Diffusion Models
Huawei Lin, Yingjie Lao, Weijie Zhao
InvDiff: Invariant Guidance for Bias Mitigation in Diffusion Models
Min Hou, Yueying Wu, Chang Xu, Yu-Hao Huang, Chenxi Bai, Le Wu, Jiang Bian
Grasp Diffusion Network: Learning Grasp Generators from Partial Point Clouds with Diffusion Models in SO(3)xR3
Joao Carvalho, An T. Le, Philipp Jahr, Qiao Sun, Julen Urain, Dorothea Koert, Jan Peters
Video Summarization using Denoising Diffusion Probabilistic Model
Zirui Shang, Yubo Zhu, Hongxi Li, Shuo Yang, Xinxiao Wu
Self-Refining Diffusion Samplers: Enabling Parallelization via Parareal Iterations
Nikil Roashan Selvam, Amil Merchant, Stefano Ermon
Non-Normal Diffusion Models
Henry Li
Score-Optimal Diffusion Schedules
Christopher Williams, Andrew Campbell, Arnaud Doucet, Saifuddin Syed
Motion Artifact Removal in Pixel-Frequency Domain via Alternate Masks and Diffusion Model
Jiahua Xu, Dawei Zhou, Lei Hu, Jianfeng Guo, Feng Yang, Zaiyi Liu, Nannan Wang, Xinbo Gao
DiffSensei: Bridging Multi-Modal LLMs and Diffusion Models for Customized Manga Generation
Jianzong Wu, Chao Tang, Jingbo Wang, Yanhong Zeng, Xiangtai Li, Yunhai Tong
Parallel simulation for sampling under isoperimetry and score-based diffusion models
Huanjian Zhou, Masashi Sugiyama
RAP-SR: RestorAtion Prior Enhancement in Diffusion Models for Realistic Image Super-Resolution
Jiangang Wang, Qingnan Fan, Jinwei Chen, Hong Gu, Feng Huang, Wenqi Ren
FIRE: Robust Detection of Diffusion-Generated Images via Frequency-Guided Reconstruction Error
Beilin Chu, Xuan Xu, Xin Wang, Yufei Zhang, Weike You, Linna Zhou
Diffusing Differentiable Representations
Yash Savani, Marc Finzi, J. Zico Kolter
Improving Source Extraction with Diffusion and Consistency Models
Tornike Karchkhadze, Mohammad Rasool Izadi, Shuo Zhang
Diff5T: Benchmarking Human Brain Diffusion MRI with an Extensive 5.0 Tesla K-Space and Spatial Dataset
Shanshan Wang, Shoujun Yu, Jian Cheng, Sen Jia, Changjun Tie, Jiayu Zhu, Haohao Peng, Yijing Dong, Jianzhong He, Fan Zhang, Yaowen Xing, Xiuqin Jia, Qi Yang, Qiyuan Tian, Hua Guo, Guobin Li, Hairong Zheng
See Further When Clear: Curriculum Consistency Model
Yunpeng Liu, Boxiao Liu, Yi Zhang, Xingzhong Hou, Guanglu Song, Yu Liu, Haihang You
Generating floorplans for various building functionalities via latent diffusion model
Mohamed R. Ibrahim, Josef Musil, Irene Gallou