Diffusion Model
Diffusion models are generative models that create data by reversing a noise-diffusion process, aiming to generate high-quality samples from complex distributions. Current research focuses on improving efficiency through techniques like stochastic Runge-Kutta methods and dynamic model architectures (e.g., Dynamic Diffusion Transformer), as well as enhancing controllability and safety via methods such as classifier-free guidance and reinforcement learning from human feedback. These advancements are significantly impacting various fields, including medical imaging, robotics, and artistic creation, by enabling novel applications in image generation, inverse problem solving, and multi-modal data synthesis.
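The two core ideas above — reversing a noise process step by step, and steering generation with classifier-free guidance — can be sketched in a few lines. This is a minimal, illustrative NumPy sketch, not any specific paper's implementation: the function names and the toy inputs are assumptions, and the reverse step follows the standard DDPM ancestral-sampling parameterization.

```python
import numpy as np

def cfg_noise(eps_uncond, eps_cond, w):
    """Classifier-free guidance: extrapolate from the unconditional
    noise prediction toward the conditional one by guidance weight w.
    w = 0 is unconditional, w = 1 purely conditional, w > 1 amplifies
    the conditioning signal."""
    return eps_uncond + w * (eps_cond - eps_uncond)

def ddpm_reverse_step(x_t, eps_hat, alpha_t, alpha_bar_t, beta_t, rng):
    """One ancestral reverse-diffusion step: remove the predicted noise
    eps_hat from x_t to estimate the posterior mean, then re-inject a
    small amount of fresh Gaussian noise (skipped at t = 0 in practice)."""
    mean = (x_t - beta_t / np.sqrt(1.0 - alpha_bar_t) * eps_hat) / np.sqrt(alpha_t)
    return mean + np.sqrt(beta_t) * rng.standard_normal(x_t.shape)

# Toy usage: blend two (fake) noise predictions, then take one step.
rng = np.random.default_rng(0)
x_t = rng.standard_normal(4)
eps_hat = cfg_noise(np.zeros(4), np.ones(4), w=1.5)
x_prev = ddpm_reverse_step(x_t, eps_hat, alpha_t=0.99,
                           alpha_bar_t=0.5, beta_t=0.01, rng=rng)
```

In a real sampler the noise predictions come from a trained network evaluated with and without the conditioning input, and the step is iterated from t = T down to t = 0 under a fixed noise schedule.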
Papers
Manifold-Guided Lyapunov Control with Diffusion Models
Amartya Mukherjee, Thanin Quartz, Jun Liu
DiffFAE: Advancing High-fidelity One-shot Facial Appearance Editing with Space-sensitive Customization and Semantic Preservation
Qilin Wang, Jiangning Zhang, Chengming Xu, Weijian Cao, Ying Tai, Yue Han, Yanhao Ge, Hong Gu, Chengjie Wang, Yanwei Fu
DiffGaze: A Diffusion Model for Continuous Gaze Sequence Generation on 360° Images
Chuhan Jiao, Yao Wang, Guanhua Zhang, Mihai Bâce, Zhiming Hu, Andreas Bulling
AnimateMe: 4D Facial Expressions via Diffusion Models
Dimitrios Gerogiannis, Foivos Paraperas Papantoniou, Rolandos Alexandros Potamias, Alexandros Lattas, Stylianos Moschoglou, Stylianos Ploumpis, Stefanos Zafeiriou
Learning Spatial Adaptation and Temporal Coherence in Diffusion Models for Video Super-Resolution
Zhikai Chen, Fuchen Long, Zhaofan Qiu, Ting Yao, Wengang Zhou, Jiebo Luo, Tao Mei
Graph Bayesian Optimization for Multiplex Influence Maximization
Zirui Yuan, Minglai Shao, Zhiqian Chen
Multiple-Source Localization from a Single-Snapshot Observation Using Graph Bayesian Optimization
Zonghan Zhang, Zijian Zhang, Zhiqian Chen
Improving Diffusion Models' Data-Corruption Resistance using Scheduled Pseudo-Huber Loss
Artem Khrapov, Vadim Popov, Tasnima Sadekova, Assel Yermekova, Mikhail Kudinov
SDXS: Real-Time One-Step Latent Diffusion Models with Image Conditions
Yuda Song, Zehao Sun, Xuanwu Yin
SatSynth: Augmenting Image-Mask Pairs through Diffusion Models for Aerial Semantic Segmentation
Aysim Toker, Marvin Eisenberger, Daniel Cremers, Laura Leal-Taixé
An Intermediate Fusion ViT Enables Efficient Text-Image Alignment in Diffusion Models
Zizhao Hu, Shaochong Jia, Mohammad Rostami
Make-Your-Anchor: A Diffusion-based 2D Avatar Generation Framework
Ziyao Huang, Fan Tang, Yong Zhang, Xiaodong Cun, Juan Cao, Jintao Li, Tong-Yee Lee
Diffusion Model is a Good Pose Estimator from 3D RF-Vision
Junqiao Fan, Jianfei Yang, Yuecong Xu, Lihua Xie
Robust Diffusion Models for Adversarial Purification
Guang Lin, Zerui Tao, Jianhai Zhang, Toshihisa Tanaka, Qibin Zhao
A Unified Module for Accelerating STABLE-DIFFUSION: LCM-LORA
Ayush Thakur, Rashmi Vashisth
An Optimization Framework to Enforce Multi-View Consistency for Texturing 3D Meshes
Zhengyi Zhao, Chen Song, Xiaodong Gu, Yuan Dong, Qi Zuo, Weihao Yuan, Liefeng Bo, Zilong Dong, Qixing Huang
Ultrasound Imaging based on the Variance of a Diffusion Restoration Model
Yuxin Zhang, Clément Huneau, Jérôme Idier, Diana Mateus
Controlled Training Data Generation with Diffusion Models
Teresa Yeo, Andrei Atanov, Harold Benoit, Aleksandr Alekseev, Ruchira Ray, Pooya Esmaeil Akhoondi, Amir Zamir
Spectral Motion Alignment for Video Motion Transfer using Diffusion Models
Geon Yeong Park, Hyeonho Jeong, Sang Wan Lee, Jong Chul Ye
Shadow Generation for Composite Image Using Diffusion Model
Qingyang Liu, Junqi You, Jianting Wang, Xinhao Tao, Bo Zhang, Li Niu