Sketch to Image
Sketch-to-image synthesis aims to generate realistic images from user-provided sketches, bridging the gap between artistic expression and digital image creation. Current research focuses on improving the fidelity and controllability of generated images using diffusion models and generative adversarial networks (GANs), often incorporating techniques such as ControlNet for precise sketch adherence or multi-scale feature fusion for detail preservation. These advances are impacting fields such as fashion design, medical imaging, and creative tooling by enabling more intuitive and efficient sketch-based image generation and manipulation.
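To make the conditioning step concrete: ControlNet-style sketch adherence feeds the generator an edge map extracted from an input image (or drawn directly by the user). The snippet below is a minimal, illustrative pure-NumPy Sobel edge extractor of the kind used to build such conditioning maps; it is a generic sketch under stated assumptions, not the preprocessing of any particular paper or of the official ControlNet pipeline (which typically uses OpenCV's Canny detector).

```python
import numpy as np

def sobel_edges(img: np.ndarray, threshold: float = 0.25) -> np.ndarray:
    """Extract a binary edge map from a grayscale image with values in [0, 1].

    Illustrative stand-in for the edge/sketch maps used as conditioning
    input in sketch-adherence methods such as ControlNet.
    """
    # Sobel kernels for horizontal and vertical gradients.
    kx = np.array([[-1, 0, 1], [-2, 0, 2], [-1, 0, 1]], dtype=float)
    ky = kx.T
    h, w = img.shape
    gx = np.zeros_like(img)
    gy = np.zeros_like(img)
    padded = np.pad(img, 1, mode="edge")  # replicate borders
    for i in range(h):
        for j in range(w):
            patch = padded[i:i + 3, j:j + 3]
            gx[i, j] = np.sum(patch * kx)
            gy[i, j] = np.sum(patch * ky)
    mag = np.hypot(gx, gy)           # gradient magnitude
    mag /= mag.max() + 1e-8          # normalize to [0, 1]
    return (mag > threshold).astype(np.uint8)

# Synthetic test image: a vertical step edge down the middle.
img = np.zeros((8, 8))
img[:, 4:] = 1.0
edges = sobel_edges(img)  # edges fire only along columns 3 and 4
```

In a real pipeline the resulting binary map would be resized to the model's input resolution and passed as the conditioning image alongside the text prompt.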
Papers
Sketch2Saliency: Learning to Detect Salient Objects from Human Drawings
Ayan Kumar Bhunia, Subhadeep Koley, Amandeep Kumar, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song
Picture that Sketch: Photorealistic Image Generation from Abstract Sketches
Subhadeep Koley, Ayan Kumar Bhunia, Aneeshan Sain, Pinaki Nath Chowdhury, Tao Xiang, Yi-Zhe Song