Text-to-Image Diffusion Models
Text-to-image diffusion models generate images from textual descriptions, aiming for high visual fidelity and precise alignment with the prompt. Current research focuses on improving controllability, addressing safety concerns (e.g., preventing the generation of inappropriate content), and enhancing personalization through techniques such as continual learning and latent-space manipulation. These advances matter for applications including medical imaging, artistic creation, and data augmentation, while also raising important ethical questions about model safety and bias.
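At their core, these models generate an image by iteratively denoising a noisy latent, steering each step toward the text prompt via classifier-free guidance. The toy sketch below illustrates one guided reverse-diffusion (DDPM-style) step; the noise predictions `eps_c` and `eps_u` stand in for the outputs of a real text-conditioned U-Net, and all names, sizes, and schedule constants are illustrative assumptions, not any specific paper's implementation.

```python
import math
import random

def classifier_free_guidance(eps_cond, eps_uncond, scale):
    # Blend conditional and unconditional noise predictions:
    # scale > 1 pushes the sample toward the text condition.
    return [u + scale * (c - u) for c, u in zip(eps_cond, eps_uncond)]

def ddpm_step(x_t, eps, alpha_t, alpha_bar_t, sigma_t):
    # One reverse-diffusion update: subtract the scaled predicted
    # noise, rescale by 1/sqrt(alpha_t), then add fresh Gaussian noise.
    coef = (1 - alpha_t) / math.sqrt(1 - alpha_bar_t)
    return [
        (x - coef * e) / math.sqrt(alpha_t) + sigma_t * random.gauss(0, 1)
        for x, e in zip(x_t, eps)
    ]

random.seed(0)
x = [random.gauss(0, 1) for _ in range(8)]      # toy flattened "latent"
eps_c = [random.gauss(0, 1) for _ in range(8)]  # noise predicted with the prompt
eps_u = [random.gauss(0, 1) for _ in range(8)]  # noise predicted with empty prompt
eps = classifier_free_guidance(eps_c, eps_u, scale=7.5)
x = ddpm_step(x, eps, alpha_t=0.99, alpha_bar_t=0.5, sigma_t=0.1)
print(len(x))  # the latent keeps its size across steps
```

In a full sampler this step is repeated over a decreasing noise schedule; several of the papers below (e.g., on unlearning, shielded generation, and personalization) intervene precisely in these noise predictions or in the latent trajectory they produce.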
Papers
Vision-guided and Mask-enhanced Adaptive Denoising for Prompt-based Image Editing
Kejie Wang, Xuemeng Song, Meng Liu, Jin Yuan, Weili Guan
Towards Reliable Verification of Unauthorized Data Usage in Personalized Text-to-Image Diffusion Models
Boheng Li, Yanhao Wei, Yankai Fu, Zhenting Wang, Yiming Li, Jie Zhang, Run Wang, Tianwei Zhang
HybridBooth: Hybrid Prompt Inversion for Efficient Subject-Driven Generation
Shanyan Guan, Yanhao Ge, Ying Tai, Jian Yang, Wei Li, Mingyu You
Unstable Unlearning: The Hidden Risk of Concept Resurgence in Diffusion Models
Vinith M. Suriyakumar, Rohan Alur, Ayush Sekhari, Manish Raghavan, Ashia C. Wilson
Sparse Repellency for Shielded Generation in Text-to-image Diffusion Models
Michael Kirchhof, James Thornton, Pierre Ablin, Louis Béthune, Eugene Ndiaye, Marco Cuturi
Holistic Unlearning Benchmark: A Multi-Faceted Evaluation for Text-to-Image Diffusion Model Unlearning
Saemi Moon, Minjong Lee, Sangdon Park, Dongwoo Kim
Mining Your Own Secrets: Diffusion Classifier Scores for Continual Personalization of Text-to-Image Diffusion Models
Saurav Jha, Shiqi Yang, Masato Ishii, Mengjie Zhao, Christian Simon, Jehanzeb Mirza, Dong Gong, Lina Yao, Shusuke Takahashi, Yuki Mitsufuji
RadGazeGen: Radiomics and Gaze-guided Medical Image Generation using Diffusion Models
Moinak Bhattacharya, Gagandeep Singh, Shubham Jain, Prateek Prasanna