Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, enabling high-quality sampling from complex distributions. Current research focuses on improving sampling efficiency through techniques such as stochastic Runge-Kutta methods and dynamic architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are shaping fields including medical imaging, robotics, and artistic creation by enabling new applications in image generation, inverse-problem solving, and multi-modal data synthesis.
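Classifier-free guidance, mentioned above, is a simple linear combination of two noise predictions at each sampling step: one conditioned on the prompt and one unconditioned. A minimal NumPy sketch (the function name and array shapes here are illustrative, not from any specific paper):

```python
import numpy as np

def classifier_free_guidance(eps_uncond, eps_cond, guidance_scale):
    """Combine unconditional and conditional noise estimates.

    eps_uncond: model's noise prediction with empty conditioning.
    eps_cond:   model's noise prediction with the real condition (e.g. a prompt).
    guidance_scale: 1.0 recovers the conditional prediction; larger values
    push samples harder toward the condition, which at high scales causes
    the oversaturation and artifacts studied in the first paper listed below.
    """
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

# Toy usage: at scale 1.0 the result is exactly the conditional prediction.
eps_u = np.zeros(4)
eps_c = np.ones(4)
guided = classifier_free_guidance(eps_u, eps_c, 7.5)
```

The guided estimate then replaces the plain conditional prediction inside the sampler's reverse step; everything else in the sampling loop is unchanged.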
Papers
Eliminating Oversaturation and Artifacts of High Guidance Scales in Diffusion Models
Seyedmorteza Sadat, Otmar Hilliges, Romann M. Weber
Unleashing the Potential of the Diffusion Model in Few-shot Semantic Segmentation
Muzhi Zhu, Yang Liu, Zekai Luo, Chenchen Jing, Hao Chen, Guangkai Xu, Xinlong Wang, Chunhua Shen
Convergence of Score-Based Discrete Diffusion Models: A Discrete-Time Analysis
Zikun Zhang, Zixiang Chen, Quanquan Gu
SoundMorpher: Perceptually-Uniform Sound Morphing with Diffusion Model
Xinlei Niu, Jing Zhang, Charles Patrick Martin
Using Style Ambiguity Loss to Improve Aesthetics of Diffusion Models
James Baker
VitaGlyph: Vitalizing Artistic Typography with Flexible Dual-branch Diffusion Models
Kailai Feng, Yabo Zhang, Haodong Yu, Zhilong Ji, Jinfeng Bai, Hongzhi Zhang, Wangmeng Zuo
HarmoniCa: Harmonizing Training and Inference for Better Feature Cache in Diffusion Transformer Acceleration
Yushi Huang, Zining Wang, Ruihao Gong, Jing Liu, Xinjie Zhang, Jinyang Guo, Xianglong Liu, Jun Zhang
Edge-preserving noise for diffusion models
Jente Vandersanden, Sascha Holl, Xingchang Huang, Gurprit Singh
Aggregation of Multi Diffusion Models for Enhancing Learned Representations
Conghan Yue, Zhengwei Peng, Shiyan Du, Zhi Ji, Dongyu Zhang
Improved Generation of Synthetic Imaging Data Using Feature-Aligned Diffusion
Lakshmi Nair
NECOMIMI: Neural-Cognitive Multimodal EEG-informed Image Generation with Diffusion Models
Chi-Sheng Chen
CusConcept: Customized Visual Concept Decomposition with Diffusion Models
Zhi Xu, Shaozhe Hao, Kai Han
A Cat Is A Cat (Not A Dog!): Unraveling Information Mix-ups in Text-to-Image Encoders through Causal Analysis and Embedding Optimization
Chieh-Yun Chen, Li-Wu Tsao, Chiang Tseng, Hong-Han Shuai
RadGazeGen: Radiomics and Gaze-guided Medical Image Generation using Diffusion Models
Moinak Bhattacharya, Gagandeep Singh, Shubham Jain, Prateek Prasanna
ACE: All-round Creator and Editor Following Instructions via Diffusion Transformer
Zhen Han, Zeyinzi Jiang, Yulin Pan, Jingfeng Zhang, Chaojie Mao, Chenwei Xie, Yu Liu, Jingren Zhou
A Survey on Diffusion Models for Inverse Problems
Giannis Daras, Hyungjin Chung, Chieh-Hsin Lai, Yuki Mitsufuji, Jong Chul Ye, Peyman Milanfar, Alexandros G. Dimakis, Mauricio Delbracio
Erase, then Redraw: A Novel Data Augmentation Approach for Free Space Detection Using Diffusion Model
Fulong Ma, Weiqing Qi, Guoyang Zhao, Ming Liu, Jun Ma
RoCoTex: A Robust Method for Consistent Texture Synthesis with Diffusion Models
Jangyeong Kim, Donggoo Kang, Junyoung Choi, Jeonga Wi, Junho Gwon, Jiun Bae, Dumim Yoon, Junghyun Han
Image Copy Detection for Diffusion Models
Wenhao Wang, Yifan Sun, Zhentao Tan, Yi Yang
GameLabel-10K: Collecting Image Preference Data Through Mobile Game Crowdsourcing
Jonathan Zhou