Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, yielding high-quality samples from complex distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on improving controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are shaping fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse-problem solving, and multi-modal data synthesis.
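To make the reverse-diffusion and classifier-free guidance ideas above concrete, here is a minimal NumPy sketch of DDPM-style ancestral sampling with classifier-free guidance. It is illustrative only: toy_denoiser is a placeholder standing in for a trained noise-prediction network, and the linear schedule, guidance_scale, and other constants are arbitrary demonstration values, not settings taken from any of the papers listed below.

# Minimal sketch of DDPM-style reverse sampling with classifier-free guidance.
# toy_denoiser is a stand-in for a trained eps-prediction model eps_theta(x_t, t, cond).
import numpy as np

T = 1000
betas = np.linspace(1e-4, 0.02, T)        # illustrative linear noise schedule
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)

def toy_denoiser(x, t, cond=None):
    # Placeholder: a real model would predict the noise added at step t.
    # Returning zeros keeps the script runnable end to end.
    return np.zeros_like(x)

def cfg_eps(x, t, cond, guidance_scale=3.0):
    # Classifier-free guidance: blend unconditional and conditional predictions,
    # eps = eps_uncond + s * (eps_cond - eps_uncond).
    eps_uncond = toy_denoiser(x, t, cond=None)
    eps_cond = toy_denoiser(x, t, cond=cond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def sample(shape=(2, 8), cond="a prompt", seed=0):
    rng = np.random.default_rng(seed)
    x = rng.standard_normal(shape)        # start from pure Gaussian noise x_T
    for t in reversed(range(T)):
        eps = cfg_eps(x, t, cond)
        # DDPM posterior mean: remove the predicted noise component and rescale.
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise  # stochastic step, deterministic at t = 0
    return x

if __name__ == "__main__":
    print(sample().shape)

In practice the denoiser is a large U-Net or transformer, and the full 1000-step loop is typically shortened with faster samplers (e.g., DDIM or the Runge-Kutta-style solvers mentioned above).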
Papers
Masked Diffusion Models are Secretly Time-Agnostic Masked Models and Exploit Inaccurate Categorical Sampling
Kaiwen Zheng, Yongxin Chen, Hanzi Mao, Ming-Yu Liu, Jun Zhu, Qinsheng Zhang
StyleTokenizer: Defining Image Style by a Single Instance for Controlling Diffusion Models
Wen Li, Muyuan Fang, Cheng Zou, Biao Gong, Ruobing Zheng, Meng Wang, Jingdong Chen, Ming Yang
Diffusion Models Learn Low-Dimensional Distributions via Subspace Clustering
Peng Wang, Huijie Zhang, Zekai Zhang, Siyi Chen, Yi Ma, Qing Qu
SketcherX: AI-Driven Interactive Robotic drawing with Diffusion model and Vectorization Techniques
Jookyung Song, Mookyoung Kang, Nojun Kwak
Exploring Low-Dimensional Subspaces in Diffusion Models for Controllable Image Editing
Siyi Chen, Huijie Zhang, Minzhe Guo, Yifu Lu, Peng Wang, Qing Qu
Enhancing Sample Efficiency and Exploration in Reinforcement Learning through the Integration of Diffusion Models and Proximal Policy Optimization
Gao Tianci, Dmitriev D. Dmitry, Konstantin A. Neusypin, Yang Bo, Rao Shengren
A Financial Time Series Denoiser Based on Diffusion Model
Zhuohan Wang, Carmine Ventre
SPDiffusion: Semantic Protection Diffusion for Multi-concept Text-to-image Generation
Yang Zhang, Rui Zhang, Xuecheng Nie, Haochen Li, Jikun Chen, Yifan Hao, Xin Zhang, Luoqi Liu, Ling Li
DPDEdit: Detail-Preserved Diffusion Models for Multimodal Fashion Image Editing
Xiaolong Wang, Zhi-Qi Cheng, Jue Wang, Xiaojiang Peng
3D Priors-Guided Diffusion for Blind Face Restoration
Xiaobin Lu, Xiaobin Hu, Jun Luo, Ben Zhu, Yaping Ruan, Wenqi Ren
Accurate Compression of Text-to-Image Diffusion Models via Vector Quantization
Vage Egiazarian, Denis Kuznedelev, Anton Voronov, Ruslan Svirschevski, Michael Goin, Daniil Pavlov, Dan Alistarh, Dmitry Baranchuk
Towards understanding Diffusion Models (on Graphs)
Solveig Klepper
LightPure: Realtime Adversarial Image Purification for Mobile Devices Using Diffusion Models
Hossein Khalili, Seongbin Park, Vincent Li, Brandan Bright, Ali Payani, Ramana Rao Kompella, Nader Sehatbakhsh
Spatially-Aware Diffusion Models with Cross-Attention for Global Field Reconstruction with Sparse Observations
Yilin Zhuang, Sibo Cheng, Karthik Duraisamy
Bridging User Dynamics: Transforming Sequential Recommendations with Schrödinger Bridge and Diffusion Models
Wenjia Xie, Rui Zhou, Hao Wang, Tingjia Shen, Enhong Chen
RISSOLE: Parameter-efficient Diffusion Models via Block-wise Generation and Retrieval-Guidance
Avideep Mukherjee, Soumya Banerjee, Vinay P. Namboodiri, Piyush Rai
Disentangled Diffusion Autoencoder for Harmonization of Multi-site Neuroimaging Data
Ayodeji Ijishakin, Ana Lawry Aguila, Elizabeth Levitis, Ahmed Abdulaal, Andre Altmann, James Cole
GenDDS: Generating Diverse Driving Video Scenarios with Prompt-to-Video Generative Model
Yongjie Fu, Yunlong Li, Xuan Di