Diffusion Models
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, enabling high-quality samples from complex distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta methods and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via classifier-free guidance and reinforcement learning from human feedback. These advances are significantly impacting fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse-problem solving, and multi-modal data synthesis.
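To make the two recurring ideas above concrete, here is a minimal, self-contained sketch of one reverse (denoising) step of a DDPM-style sampler combined with classifier-free guidance. Everything here is illustrative: `eps_model` is a hypothetical placeholder for a trained noise predictor, and the schedule, guidance scale, and shapes are arbitrary demo values, not any specific paper's setup.

```python
import numpy as np

def eps_model(x, t, cond=None):
    # Hypothetical placeholder for a trained noise-prediction network.
    # The conditional branch just shifts the output slightly so the demo
    # has something to guide toward; a real model would be learned.
    return 0.1 * x + (0.0 if cond is None else 0.05)

def cfg_eps(x, t, cond, guidance_scale):
    # Classifier-free guidance: extrapolate from the unconditional
    # prediction toward the conditional one.
    eps_uncond = eps_model(x, t, cond=None)
    eps_cond = eps_model(x, t, cond=cond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def ddpm_reverse_step(x_t, t, betas, cond, guidance_scale=3.0, rng=None):
    # One ancestral sampling step x_t -> x_{t-1}, using the standard
    # DDPM posterior mean and the common choice sigma_t^2 = beta_t.
    rng = rng or np.random.default_rng()
    alphas = 1.0 - betas
    alpha_bar = np.prod(alphas[: t + 1])
    eps = cfg_eps(x_t, t, cond, guidance_scale)
    mean = (x_t - betas[t] / np.sqrt(1.0 - alpha_bar) * eps) / np.sqrt(alphas[t])
    noise = rng.standard_normal(x_t.shape) if t > 0 else 0.0
    return mean + np.sqrt(betas[t]) * noise

# Tiny demo: start from pure noise and run the full reverse chain.
T = 50
betas = np.linspace(1e-4, 0.02, T)
rng = np.random.default_rng(0)
x = rng.standard_normal((8, 8))
for t in reversed(range(T)):
    x = ddpm_reverse_step(x, t, betas, cond="example prompt", rng=rng)
print("sample mean/std:", x.mean(), x.std())
```

The guidance line is the whole trick: with `guidance_scale > 1`, the sampler overshoots the conditional prediction, trading sample diversity for stronger adherence to the conditioning signal.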
Papers
EnergyMoGen: Compositional Human Motion Generation with Energy-Based Diffusion Model in Latent Space
Jianrong Zhang, Hehe Fan, Yi Yang
Qua$^2$SeDiMo: Quantifiable Quantization Sensitivity of Diffusion Models
Keith G. Mills, Mohammad Salameh, Ruichen Chen, Negar Hassanpour, Wei Lu, Di Niu
DiffSim: Taming Diffusion Models for Evaluating Visual Similarity
Yiren Song, Xiaokang Liu, Mike Zheng Shou
Enhancing Diffusion Models for High-Quality Image Generation
Jaineet Shah, Michael Gromis, Rickston Pinto
PixelMan: Consistent Object Editing with Diffusion Models via Pixel Manipulation and Generation
Liyao Jiang, Negar Hassanpour, Mohammad Salameh, Mohammadreza Samadi, Jiao He, Fengyu Sun, Di Niu
SurgSora: Decoupled RGBD-Flow Diffusion Model for Controllable Surgical Video Generation
Tong Chen, Shuya Yang, Junyi Wang, Long Bai, Hongliang Ren, Luping Zhou
IDEQ: an improved diffusion model for the TSP
Mickael Basson, Philippe Preux
TAUDiff: Improving statistical downscaling for extreme weather events using generative diffusion models
Rahul Sundar, Nishant Parashar, Antoine Blanchard, Boyko Dodov
Attentive Eraser: Unleashing Diffusion Model's Object Removal Potential via Self-Attention Redirection Guidance
Wenhao Sun, Benlei Cui, Jingqun Tang, Xue-Mei Dong
Rethinking Diffusion-Based Image Generators for Fundus Fluorescein Angiography Synthesis on Limited Data
Chengzhou Yu, Huihui Fang, Hongqiu Wang, Ting Deng, Qing Du, Yanwu Xu, Weihua Yang
Towards a Training Free Approach for 3D Scene Editing
Vivek Madhavaram, Shivangana Rawat, Chaitanya Devaguptapu, Charu Sharma, Manohar Kaul
Consistent Diffusion: Denoising Diffusion Model with Data-Consistent Training for Image Restoration
Xinlong Cheng, Tiantian Cao, Guoan Cheng, Bangxuan Huang, Xinghan Tian, Ye Wang, Xiaoyu He, Weixin Li, Tianfan Xue, Xuan Dong
Causal Diffusion Transformers for Generative Modeling
Chaorui Deng, Deyao Zhu, Kunchang Li, Shi Guan, Haoqi Fan
CAP4D: Creating Animatable 4D Portrait Avatars with Morphable Multi-View Diffusion Models
Felix Taubner, Ruihang Zhang, Mathieu Tuli, David B. Lindell
LineArt: A Knowledge-guided Training-free High-quality Appearance Transfer for Design Drawing with Diffusion Model
Xi Wang, Hongzhen Li, Heng Fang, Yichen Peng, Haoran Xie, Xi Yang, Chuntao Li
IGR: Improving Diffusion Model for Garment Restoration from Person Image
Le Shen, Rong Huang, Zhijie Wang
Nearly Zero-Cost Protection Against Mimicry by Personalized Diffusion Models
Namhyuk Ahn, KiYoon Yoo, Wonhyuk Ahn, Daesik Kim, Seung-Hun Nam
Segment-Level Diffusion: A Framework for Controllable Long-Form Generation with Diffusion Language Models
Xiaochen Zhu, Georgi Karadzhov, Chenxi Whitehouse, Andreas Vlachos
VividFace: A Diffusion-Based Hybrid Framework for High-Fidelity Video Face Swapping
Hao Shao, Shulun Wang, Yang Zhou, Guanglu Song, Dailan He, Shuo Qin, Zhuofan Zong, Bingqi Ma, Yu Liu, Hongsheng Li
Understanding and Mitigating Memorization in Diffusion Models for Tabular Data
Zhengyu Fang, Zhimeng Jiang, Huiyuan Chen, Xiao Li, Jing Li