Pre-Trained Diffusion Models
Pre-trained diffusion models are generative models that serve as powerful priors for solving a wide range of inverse problems, particularly in image processing and generation. Current research focuses on improving efficiency (e.g., one-step methods and faster sampling algorithms), enhancing control over generation (e.g., through guidance mechanisms and fine-tuning strategies such as LoRA), and addressing security concerns (e.g., mitigating memorization and membership inference attacks). This line of work is significant because it leverages the strong generative capabilities of these models to achieve state-of-the-art results across diverse applications, ranging from image restoration and super-resolution to more complex tasks such as image composition and 3D reconstruction.
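The core idea behind using a diffusion prior with gradient guidance can be illustrated with a deliberately tiny sketch. Below, the "pre-trained prior" is replaced by a standard Gaussian whose score function is known in closed form; in a real system this score would come from a trained noise-prediction network. The sampler is an annealed Langevin loop with a data-consistency gradient pulling samples toward the measurement, in the spirit of guidance-based posterior sampling. All names (`prior_score`, `guided_sample`, the matrix `A`, the `guidance` weight) are illustrative assumptions, not any specific paper's API.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for a pre-trained diffusion prior: a standard Gaussian,
# whose score is score(x) = -x. A real model would supply this via its
# learned noise predictor.
def prior_score(x):
    return -x

# Toy linear inverse problem: observe y = A @ x_true (noiseless here).
A = np.array([[1.0, 0.5],
              [0.0, 1.0]])
x_true = np.array([0.8, -0.3])
y = A @ x_true

def guided_sample(n_steps=2000, step=1e-2, guidance=50.0):
    """Annealed Langevin sampling with a gradient-guidance term that
    enforces consistency with the measurement y."""
    x = rng.standard_normal(2)
    for t in range(n_steps):
        # Gradient of the data-consistency term -||y - A x||^2 / 2.
        grad_data = A.T @ (y - A @ x)
        score = prior_score(x) + guidance * grad_data
        # Anneal the injected noise to zero over the trajectory.
        noise_scale = np.sqrt(2 * step) * (1 - t / n_steps)
        x = x + step * score + noise_scale * rng.standard_normal(2)
    return x

x_hat = guided_sample()
print(x_hat)  # close to x_true, shrunk slightly toward the prior mean
```

Because the guidance term is just a gradient added to the score, the same loop structure carries over when `prior_score` is replaced by a neural score network, which is what makes gradient guidance attractive as a plug-in mechanism for inverse problems.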
Papers
GLoD: Composing Global Contexts and Local Details in Image Generation
Moyuru Yamada
Reconstructing the Image Stitching Pipeline: Integrating Fusion and Rectangling into a Unified Inpainting Model
Ziqi Xie, Weidong Zhao, Xianhui Liu, Jian Zhao, Ning Jia
Gradient Guidance for Diffusion Models: An Optimization Perspective
Yingqing Guo, Hui Yuan, Yukang Yang, Minshuo Chen, Mengdi Wang
Towards Memorization-Free Diffusion Models
Chen Chen, Daochang Liu, Chang Xu
Model-Agnostic Human Preference Inversion in Diffusion Models
Jeeyung Kim, Ze Wang, Qiang Qiu
TryOn-Adapter: Efficient Fine-Grained Clothing Identity Adaptation for High-Fidelity Virtual Try-On
Jiazheng Xing, Chao Xu, Yijie Qian, Yang Liu, Guang Dai, Baigui Sun, Yong Liu, Jingdong Wang
Latency-Aware Generative Semantic Communications with Pre-Trained Diffusion Models
Li Qiao, Mahdi Boloursaz Mashhadi, Zhen Gao, Chuan Heng Foh, Pei Xiao, Mehdi Bennis
Invertible Diffusion Models for Compressed Sensing
Bin Chen, Zhenyu Zhang, Weiqi Li, Chen Zhao, Jiwen Yu, Shijie Zhao, Jie Chen, Jian Zhang