Diffusion Model
Diffusion models are generative models that learn to reverse a gradual noising process, enabling them to draw high-quality samples from complex data distributions. Current research focuses on improving efficiency through techniques such as stochastic Runge-Kutta solvers and dynamic model architectures (e.g., the Dynamic Diffusion Transformer), and on enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advances are having significant impact across fields including medical imaging, robotics, and artistic creation, enabling novel applications in image generation, inverse-problem solving, and multi-modal data synthesis.
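To make the reverse-diffusion idea concrete, here is a minimal NumPy sketch of the standard DDPM-style forward (noising) process and one ancestral reverse step. The schedule values, the toy 1-D "data", and the function names are illustrative assumptions, not taken from any paper listed below; a real model would replace the perfect noise prediction with a trained network's output.

```python
import numpy as np

# Toy sketch of DDPM-style diffusion (illustrative only).
T = 1000
betas = np.linspace(1e-4, 0.02, T)      # linear noise schedule (assumed)
alphas = 1.0 - betas
alpha_bars = np.cumprod(alphas)         # cumulative product \bar{alpha}_t

rng = np.random.default_rng(0)
x0 = rng.standard_normal(16)            # stand-in for a clean data sample

def forward_diffuse(x0, t, eps):
    """Sample x_t ~ q(x_t | x_0) in closed form."""
    return np.sqrt(alpha_bars[t]) * x0 + np.sqrt(1.0 - alpha_bars[t]) * eps

def reverse_step(x_t, t, eps_pred, z):
    """One ancestral sampling step x_t -> x_{t-1}, given a noise prediction."""
    coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
    mean = (x_t - coef * eps_pred) / np.sqrt(alphas[t])
    sigma = np.sqrt(betas[t]) if t > 0 else 0.0   # no noise at the final step
    return mean + sigma * z

t = 500
eps = rng.standard_normal(16)
x_t = forward_diffuse(x0, t, eps)

# With a perfect noise prediction (eps itself), the reverse mean moves x_t
# back toward x_0; a trained network would instead predict eps from (x_t, t).
x_prev = reverse_step(x_t, t, eps_pred=eps, z=rng.standard_normal(16))
```

Note that by the last timestep the signal coefficient `sqrt(alpha_bars[T-1])` is nearly zero, so `x_T` is essentially pure Gaussian noise, which is what allows sampling to start from noise and denoise step by step.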
Papers
Temporal Feature Matters: A Framework for Diffusion Model Quantization
Yushi Huang, Ruihao Gong, Xianglong Liu, Jing Liu, Yuhang Li, Jiwen Lu, Dacheng Tao
FIND: Fine-tuning Initial Noise Distribution with Policy Optimization for Diffusion Models
Changgu Chen, Libing Yang, Xiaoyan Yang, Lianggangxu Chen, Gaoqi He, Changbo Wang, Yang Li
ClickDiff: Click to Induce Semantic Contact Map for Controllable Grasp Generation with Diffusion Models
Peiming Li, Ziyi Wang, Mengyuan Liu, Hong Liu, Chen Chen
Unifying Visual and Semantic Feature Spaces with Diffusion Models for Enhanced Cross-Modal Alignment
Yuze Zheng, Zixuan Li, Xiangxian Li, Jinxing Liu, Yuqing Wang, Xiangxu Meng, Lei Meng
How to Segment in 3D Using 2D Models: Automated 3D Segmentation of Prostate Cancer Metastatic Lesions on PET Volumes Using Multi-angle Maximum Intensity Projections and Diffusion Models
Amirhosein Toosi, Sara Harsini, François Bénard, Carlos Uribe, Arman Rahmim
Answerability Fields: Answerable Location Estimation via Diffusion Models
Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, Koya Sakamoto, Motoaki Kawanabe
RegionDrag: Fast Region-Based Image Editing with Diffusion Models
Jingyi Lu, Xinghui Li, Kai Han
Self-supervised Pre-training with Diffusion Model for Few-shot Landmark Detection in X-ray Images
Roberto Di Via, Francesca Odone, Vito Paolo Pastore
Segmentation-guided MRI reconstruction for meaningfully diverse reconstructions
Jan Nikolas Morshuis, Matthias Hein, Christian F. Baumgartner
Self-Supervision Improves Diffusion Models for Tabular Data Imputation
Yixin Liu, Thalaiyasingam Ajanthan, Hisham Husain, Vu Nguyen
Diffusion Models For Multi-Modal Generative Modeling
Changyou Chen, Han Ding, Bunyamin Sisman, Yi Xu, Ouye Xie, Benjamin Z. Yao, Son Dinh Tran, Belinda Zeng
LPGen: Enhancing High-Fidelity Landscape Painting Generation through Diffusion Model
Wanggong Yang, Xiaona Wang, Yingrui Qiu, Yifei Zhao
Unpaired Photo-realistic Image Deraining with Energy-informed Diffusion Model
Yuanbo Wen, Tao Gao, Ting Chen
MemBench: Memorized Image Trigger Prompt Dataset for Diffusion Models
Chunsan Hong, Tae-Hyun Oh, Minhyuk Sung
Sparse Inducing Points in Deep Gaussian Processes: Enhancing Modeling with Denoising Diffusion Variational Inference
Jian Xu, Delu Zeng, John Paisley
Diffree: Text-Guided Shape Free Object Inpainting with Diffusion Model
Lirui Zhao, Tianshuo Yang, Wenqi Shao, Yuxin Zhang, Yu Qiao, Ping Luo, Kaipeng Zhang, Rongrong Ji
SAR to Optical Image Translation with Color Supervised Diffusion Model
Xinyu Bai, Feng Xu