Text-to-Image Generative Models
Text-to-image generative models synthesize realistic images from textual descriptions, aiming to bridge the gap between human language and visual representation. Current research focuses heavily on addressing biases inherent in these models (often stemming from their training data), improving compositional accuracy, and mitigating vulnerabilities to adversarial attacks such as poisoning and backdoor attacks. These models are rapidly impacting fields from art and design to scientific visualization, but their ethical implications and potential for misuse necessitate ongoing investigation into robustness, fairness, and accountability.
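As a concrete illustration of the text-to-image workflow described above, the sketch below generates an image from a prompt with the Hugging Face `diffusers` library. It is a minimal example only: the checkpoint name, prompt, and sampling settings are illustrative assumptions and are not taken from any of the papers listed on this page.

```python
# Minimal text-to-image sketch using the Hugging Face `diffusers` library.
# The model id, prompt, and settings are illustrative assumptions, not tied
# to any paper listed on this page.
import torch
from diffusers import StableDiffusionPipeline

# Load a pretrained latent-diffusion text-to-image pipeline
# (assumed checkpoint; any compatible checkpoint works the same way).
pipe = StableDiffusionPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    torch_dtype=torch.float16,
)
pipe = pipe.to("cuda")  # requires a CUDA GPU; use "cpu" (and float32) otherwise, much slower

prompt = "a watercolor painting of a lighthouse at sunset"

# The text encoder conditions the denoising process on the prompt;
# guidance_scale controls how strongly the image follows the text.
image = pipe(prompt, num_inference_steps=30, guidance_scale=7.5).images[0]
image.save("lighthouse.png")
```

Biases discussed above typically surface at exactly this step: the same prompt, sampled repeatedly, can skew toward particular demographics or styles inherited from the training data.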
31 papers
Papers
April 11, 2024
OpenBias: Open-set Bias Detection in Text-to-Image Generative Models
Moreno D'Incà, Elia Peruzzo, Massimiliano Mancini, Dejia Xu, Vidit Goel, Xingqian Xu, Zhangyang Wang, Humphrey Shi, Nicu Sebe

Rethinking Artistic Copyright Infringements in the Era of Text-to-Image Generative Models
Mazda Moayeri, Samyadeep Basu, Sriram Balasubramanian, Priyatham Kattakinda, Atoosa Chengini, Robert Brauneis, Soheil Feizi