Diffusion Model
Diffusion models are generative models that synthesize data by learning to reverse a gradual noising process, producing high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having a significant impact on fields including medical imaging, robotics, and artistic creation, enabling new applications in image generation, inverse problem solving, and multi-modal data synthesis.
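To make the summary concrete, the sketch below shows how the reverse (denoising) process and classifier-free guidance fit together in a single DDPM-style sampling step. It is an illustrative outline under stated assumptions, not code from any paper listed here: the noise predictor eps_model, the linear beta schedule, and the guidance_scale value are placeholders chosen for clarity.

```python
# Minimal sketch of one DDPM-style reverse step with classifier-free guidance.
# `eps_model`, the schedule, and `guidance_scale` are illustrative assumptions,
# not the implementation of any specific paper below.
import torch

T = 1000
betas = torch.linspace(1e-4, 0.02, T)          # linear noise schedule
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)      # cumulative product, \bar{alpha}_t

def reverse_step(eps_model, x_t, t, cond, guidance_scale=3.0):
    """Sample x_{t-1} from x_t using a learned noise predictor."""
    # Classifier-free guidance: mix conditional and unconditional predictions.
    eps_cond = eps_model(x_t, t, cond)
    eps_uncond = eps_model(x_t, t, None)
    eps = eps_uncond + guidance_scale * (eps_cond - eps_uncond)

    beta_t, alpha_t, alpha_bar_t = betas[t], alphas[t], alpha_bars[t]
    # Posterior mean of the reverse process (DDPM formulation).
    mean = (x_t - beta_t / torch.sqrt(1.0 - alpha_bar_t) * eps) / torch.sqrt(alpha_t)
    if t == 0:
        return mean                            # no noise added at the final step
    noise = torch.randn_like(x_t)
    return mean + torch.sqrt(beta_t) * noise   # fixed variance sigma_t^2 = beta_t

# Toy usage: a stand-in "model" that predicts zero noise, starting from Gaussian noise.
dummy_model = lambda x, t, c: torch.zeros_like(x)
x = torch.randn(1, 3, 32, 32)
for t in reversed(range(T)):
    x = reverse_step(dummy_model, x, t, cond="a photo of a cat")
```

In practice eps_model would be a trained network (e.g., a U-Net or diffusion transformer), and the guidance scale trades sample diversity for adherence to the conditioning signal.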
Papers
Drug Discovery SMILES-to-Pharmacokinetics Diffusion Models with Deep Molecular Understanding
Bing Hu, Anita Layton, Helen Chen
DifuzCam: Replacing Camera Lens with a Mask and a Diffusion Model
Erez Yosef, Raja Giryes
Unsupervised Blind Joint Dereverberation and Room Acoustics Estimation with Diffusion Models
Jean-Marie Lemercier, Eloi Moliner, Simon Welker, Vesa Välimäki, Timo Gerkmann
KIND: Knowledge Integration and Diversion in Diffusion Models
Yucheng Xie, Fu Feng, Jing Wang, Xin Geng, Yong Rui
GRIF-DM: Generation of Rich Impression Fonts using Diffusion Models
Lei Kang, Fei Yang, Kai Wang, Mohamed Ali Souibgui, Lluis Gomez, Alicia Fornés, Ernest Valveny, Dimosthenis Karatzas
Low-Bitwidth Floating Point Quantization for Efficient High-Quality Diffusion Models
Cheng Chen, Christina Giannoula, Andreas Moshovos
DiffSG: A Generative Solver for Network Optimization with Diffusion Model
Ruihuai Liang, Bo Yang, Zhiwen Yu, Bin Guo, Xuelin Cao, Mérouane Debbah, H. Vincent Poor, Chau Yuen
Efficient and Scalable Point Cloud Generation with Sparse Point-Voxel Diffusion Models
Ioannis Romanelis, Vlassios Fotis, Athanasios Kalogeras, Christos Alexakos, Konstantinos Moustakas, Adrian Munteanu
Diffuse-UDA: Addressing Unsupervised Domain Adaptation in Medical Image Segmentation with Appearance and Structure Aligned Diffusion Models
Haifan Gong, Yitao Wang, Yihan Wang, Jiashun Xiao, Xiang Wan, Haofeng Li
A Simple Early Exiting Framework for Accelerated Sampling in Diffusion Models
Taehong Moon, Moonseok Choi, EungGu Yun, Jongmin Yoon, Gayoung Lee, Jaewoong Cho, Juho Lee
BRAT: Bonus oRthogonAl Token for Architecture Agnostic Textual Inversion
James Baker
LLDif: Diffusion Models for Low-light Emotion Recognition
Zhifeng Wang, Kaihao Zhang, Ramesh Sankaranarayana
Connective Viewpoints of Signal-to-Noise Diffusion Models
Khanh Doan, Long Tung Vuong, Tuan Nguyen, Anh Tuan Bui, Quyen Tran, Thanh-Toan Do, Dinh Phung, Trung Le