Text-to-Image Diffusion Models
Text-to-image diffusion models generate images from textual descriptions, aiming for high visual fidelity and precise alignment between the generated image and the prompt. Current research focuses on improving controllability, addressing safety concerns (e.g., preventing the generation of inappropriate content), and enhancing personalization through techniques such as continual learning and latent-space manipulation. These advances matter for applications including medical imaging, artistic creation, and data augmentation, while also raising important ethical questions about model safety and bias.
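To make the basic text-to-image generation loop concrete, here is a minimal sketch using the Hugging Face diffusers library; the checkpoint name and parameter values are illustrative defaults, not drawn from any of the papers below:

```python
# Minimal text-to-image generation with a pretrained latent diffusion model.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained checkpoint (the model ID is an example; any
# diffusers-compatible text-to-image checkpoint works the same way).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")

# The prompt conditions the iterative denoising process; guidance_scale
# trades prompt adherence (text-image alignment) against sample diversity.
prompt = "an astronaut riding a horse on the moon, photorealistic"
image = pipe(prompt, num_inference_steps=50, guidance_scale=7.5).images[0]
image.save("astronaut.png")
```

Higher guidance_scale values push samples toward the prompt at some cost in diversity, which is one simple lever behind the controllability and alignment themes surveyed here.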
Papers
Diffusion Soup: Model Merging for Text-to-Image Diffusion Models
Benjamin Biggs, Arjun Seshadri, Yang Zou, Achin Jain, Aditya Golatkar, Yusheng Xie, Alessandro Achille, Ashwin Swaminathan, Stefano Soatto
One-Step Effective Diffusion Network for Real-World Image Super-Resolution
Rongyuan Wu, Lingchen Sun, Zhiyuan Ma, Lei Zhang
DiffUHaul: A Training-Free Method for Object Dragging in Images
Omri Avrahami, Rinon Gal, Gal Chechik, Ohad Fried, Dani Lischinski, Arash Vahdat, Weili Nie
Segmentation-Free Guidance for Text-to-Image Diffusion Models
Kambiz Azarian, Debasmit Das, Qiqi Hou, Fatih Porikli
Dimba: Transformer-Mamba Diffusion Models
Zhengcong Fei, Mingyuan Fan, Changqian Yu, Debang Li, Youqiang Zhang, Junshi Huang
MultiEdits: Simultaneous Multi-Aspect Editing with Text-to-Image Diffusion Models
Mingzhen Huang, Jialing Cai, Shan Jia, Vishnu Suresh Lokhande, Siwei Lyu