Paper ID: 2311.12803

On Copyright Risks of Text-to-Image Diffusion Models

Yang Zhang, Teoh Tze Tzun, Lim Wei Hern, Haonan Wang, Kenji Kawaguchi

Diffusion models excel at many generative modeling tasks, notably creating images from text prompts, a task referred to as text-to-image (T2I) generation. Despite their ability to generate high-quality images, these models often replicate elements of their training data, which has raised growing copyright concerns in real-world applications. In response to these concerns, recent studies have examined the copyright behavior of diffusion models under direct, copyrighted prompts. Our research extends this line of work by examining subtler forms of infringement, where even indirect prompts can trigger copyright issues. Specifically, we introduce a data generation pipeline that systematically produces data for studying copyright in diffusion models. This pipeline lets us investigate copyright infringement in a more practical setting: replicating distinctive visual features, rather than entire works, from seemingly irrelevant prompts in T2I generation. Using data generated by our pipeline, we test various diffusion models, including the latest Stable Diffusion XL. Our findings reveal a widespread tendency of these models to produce copyright-infringing content, highlighting a significant challenge in this field.
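To make the probing setup concrete, below is a minimal sketch of querying a T2I model with an indirect prompt, in the spirit of what the abstract describes. It uses the Hugging Face diffusers library with the public Stable Diffusion XL checkpoint; the prompt is a hypothetical illustration, not one produced by the paper's pipeline.

```python
# Minimal sketch (assumes the `diffusers` library and a CUDA GPU):
# generate an image from a prompt that names visual features rather
# than a copyrighted work directly. The prompt is an illustrative
# example, not taken from the paper.
import torch
from diffusers import StableDiffusionXLPipeline

pipe = StableDiffusionXLPipeline.from_pretrained(
    "stabilityai/stable-diffusion-xl-base-1.0", torch_dtype=torch.float16
)
pipe = pipe.to("cuda")

# An indirect prompt: it describes distinctive visual features only.
prompt = "a cheerful yellow cartoon sponge wearing brown square pants, underwater"
image = pipe(prompt).images[0]
image.save("indirect_prompt_sample.png")
```

The generated image could then be compared against the protected work (e.g., by human inspection or an image-similarity metric) to judge whether distinctive visual features were replicated.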

Submitted: Sep 15, 2023