Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
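To make the ideas above concrete, here is a minimal sketch of one reverse (denoising) diffusion step combined with classifier-free guidance, using a stand-in noise predictor. All names (`toy_eps_model`, `cfg_eps`, `reverse_step`) and the toy schedule are illustrative assumptions, not the implementation of any paper listed below.

```python
import numpy as np

def toy_eps_model(x, t, cond=None):
    # Stand-in noise predictor; in practice this is a trained network
    # (e.g. a U-Net or diffusion transformer).
    scale = 0.1 if cond is None else 0.2
    return scale * x

def cfg_eps(x, t, cond, guidance_scale):
    # Classifier-free guidance: blend unconditional and conditional
    # noise predictions; guidance_scale > 1 strengthens conditioning.
    eps_uncond = toy_eps_model(x, t, cond=None)
    eps_cond = toy_eps_model(x, t, cond=cond)
    return eps_uncond + guidance_scale * (eps_cond - eps_uncond)

def reverse_step(x, t, betas, alphas_cumprod, cond, guidance_scale=3.0, rng=None):
    # One DDPM ancestral sampling step: predict the noise, form the
    # posterior mean, and add scaled Gaussian noise (except at t == 0).
    rng = rng or np.random.default_rng(0)
    eps = cfg_eps(x, t, cond, guidance_scale)
    alpha_t = 1.0 - betas[t]
    abar_t = alphas_cumprod[t]
    mean = (x - betas[t] / np.sqrt(1.0 - abar_t) * eps) / np.sqrt(alpha_t)
    if t == 0:
        return mean
    return mean + np.sqrt(betas[t]) * rng.standard_normal(x.shape)

# Tiny end-to-end loop on a 1-D toy signal: start from pure noise
# and run the reverse chain from t = T-1 down to t = 0.
T = 10
betas = np.linspace(1e-4, 0.02, T)
alphas_cumprod = np.cumprod(1.0 - betas)
rng = np.random.default_rng(42)
x = rng.standard_normal(8)
for t in reversed(range(T)):
    x = reverse_step(x, t, betas, alphas_cumprod, cond="label", rng=rng)
print(x.shape)
```

The same loop structure underlies the faster samplers mentioned above (e.g. higher-order Runge-Kutta-style solvers), which replace the single ancestral step with a more accurate multi-evaluation update.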
Papers
Learning Feature-Preserving Portrait Editing from Generated Pairs
Bowei Chen, Tiancheng Zhi, Peihao Zhu, Shen Sang, Jing Liu, Linjie Luo
Specify and Edit: Overcoming Ambiguity in Text-Based Image Editing
Ekaterina Iakovleva, Fabio Pizzati, Philip Torr, Stéphane Lathuilière
LatentArtiFusion: An Effective and Efficient Histological Artifacts Restoration Framework
Zhenqi He, Wenrui Liu, Minghao Yin, Kai Han
FedDEO: Description-Enhanced One-Shot Federated Learning with Diffusion Models
Mingzhao Yang, Shangchao Su, Bin Li, Xiangyang Xue
Map2Traj: Street Map Piloted Zero-shot Trajectory Generation with Diffusion Model
Zhenyu Tao, Wei Xu, Xiaohu You
Retinex-Diffusion: On Controlling Illumination Conditions in Diffusion Models via Retinex Theory
Xiaoyan Xing, Vincent Tao Hu, Jan Hendrik Metzen, Konrad Groh, Sezer Karaoglu, Theo Gevers
Temporal Feature Matters: A Framework for Diffusion Model Quantization
Yushi Huang, Ruihao Gong, Xianglong Liu, Jing Liu, Yuhang Li, Jiwen Lu, Dacheng Tao
FIND: Fine-tuning Initial Noise Distribution with Policy Optimization for Diffusion Models
Changgu Chen, Libing Yang, Xiaoyan Yang, Lianggangxu Chen, Gaoqi He, Changbo Wang, Yang Li
ClickDiff: Click to Induce Semantic Contact Map for Controllable Grasp Generation with Diffusion Models
Peiming Li, Ziyi Wang, Mengyuan Liu, Hong Liu, Chen Chen
Unifying Visual and Semantic Feature Spaces with Diffusion Models for Enhanced Cross-Modal Alignment
Yuze Zheng, Zixuan Li, Xiangxian Li, Jinxing Liu, Yuqing Wang, Xiangxu Meng, Lei Meng
How To Segment in 3D Using 2D Models: Automated 3D Segmentation of Prostate Cancer Metastatic Lesions on PET Volumes Using Multi-Angle Maximum Intensity Projections and Diffusion Models
Amirhosein Toosi, Sara Harsini, François Bénard, Carlos Uribe, Arman Rahmim
Answerability Fields: Answerable Location Estimation via Diffusion Models
Daichi Azuma, Taiki Miyanishi, Shuhei Kurita, Koya Sakamoto, Motoaki Kawanabe
RegionDrag: Fast Region-Based Image Editing with Diffusion Models
Jingyi Lu, Xinghui Li, Kai Han
Self-supervised pre-training with diffusion model for few-shot landmark detection in x-ray images
Roberto Di Via, Francesca Odone, Vito Paolo Pastore
Segmentation-guided MRI reconstruction for meaningfully diverse reconstructions
Jan Nikolas Morshuis, Matthias Hein, Christian F. Baumgartner
Self-Supervision Improves Diffusion Models for Tabular Data Imputation
Yixin Liu, Thalaiyasingam Ajanthan, Hisham Husain, Vu Nguyen