Paper ID: 2204.07924

StyleT2F: Generating Human Faces from Textual Description Using StyleGAN2

Mohamed Shawky Sabae, Mohamed Ahmed Dardir, Remonda Talaat Eskarous, Mohamed Ramzy Ebbed

AI-driven image generation has improved significantly in recent years. Generative adversarial networks (GANs), such as StyleGAN, can generate high-quality, realistic data while offering artistic control over the output. In this work, we present StyleT2F, a method for controlling the output of StyleGAN2 with text, enabling the generation of a detailed human face from a textual description. We utilize StyleGAN's latent space to manipulate different facial features and conditionally sample the required latent code, which embeds the facial features mentioned in the input text. Our method captures the required features correctly and shows consistency between the input text and the output images. Moreover, our method guarantees disentangled manipulation of a wide range of facial features sufficient to describe a human face.
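
To make the abstract's core mechanism concrete, below is a minimal sketch of text-conditioned latent editing in a StyleGAN2-style W space. It is an illustration under stated assumptions, not the authors' implementation: the names (FEATURE_DIRECTIONS, parse_features, edit_latent), the keyword-based text parser, and the random stand-in direction vectors are all hypothetical; in practice the directions would be learned from labeled latents and the text would go through a proper encoder.

```python
# Hedged sketch: shifting a StyleGAN2-style latent code w along
# per-feature direction vectors selected from input text.
# All names and data here are hypothetical placeholders.
import numpy as np

W_DIM = 512  # dimensionality of StyleGAN2's intermediate latent space W
rng = np.random.default_rng(0)

# Hypothetical per-feature direction vectors in W space; real ones could
# be obtained, e.g., by fitting a linear boundary between latents labeled
# with and without each attribute.
FEATURE_DIRECTIONS = {
    "blond hair": rng.standard_normal(W_DIM),
    "smiling": rng.standard_normal(W_DIM),
    "eyeglasses": rng.standard_normal(W_DIM),
}

def parse_features(text):
    """Toy keyword matcher standing in for a real text encoder."""
    text = text.lower()
    return [name for name in FEATURE_DIRECTIONS if name in text]

def edit_latent(w, features, strength=1.5):
    """Shift latent code w along each requested feature direction.

    Moving along one unit-normalized direction per attribute is a common
    way to keep edits disentangled, so changing one facial feature
    leaves the others intact.
    """
    w = w.copy()
    for name in features:
        d = FEATURE_DIRECTIONS[name]
        w += strength * d / np.linalg.norm(d)
    return w

# Usage: start from a sampled latent and apply features found in the text.
w = rng.standard_normal(W_DIM)
description = "a smiling woman with blond hair and eyeglasses"
w_edited = edit_latent(w, parse_features(description))
# w_edited would then be passed to a pretrained StyleGAN2 synthesis
# network to render the described face.
```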

Submitted: Apr 17, 2022