Style Transfer
Style transfer aims to modify the visual or auditory style of data (images, audio, 3D scenes, text) while preserving its content. Current research focuses on efficient and controllable methods built on architectures such as diffusion models, neural radiance fields, transformers, and Gaussian splatting, often combining attention manipulation with optimization-based techniques to achieve training-free or few-shot transfer. These advances enable finer creative control and more efficient manipulation of multimedia data across image editing, 3D modeling, audio processing, and natural language processing. High-quality, controllable style transfer is crucial for applications ranging from artistic expression to medical image analysis.
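To make the "optimization-based" idea concrete, below is a minimal sketch of classic optimization-based style transfer in the spirit of Gatys et al., using torchvision's pretrained VGG-19. It is illustrative only and not the method of any paper listed below; the layer indices, loss weights, and step count are assumed hyperparameters, and inputs are assumed to be (1, 3, H, W) tensors in [0, 1] (ImageNet normalization is omitted for brevity).

```python
# Minimal optimization-based style transfer sketch (Gatys-style), assuming
# torchvision's pretrained VGG-19 is available. Illustrative, not a reference
# implementation of any paper in this section.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen VGG-19 feature extractor; a few conv layers supply style/content features.
vgg = vgg19(weights=VGG19_Weights.DEFAULT).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}   # conv1_1 .. conv5_1 in vgg.features (assumed choice)
CONTENT_LAYER = 21                  # conv4_2 (assumed choice)

def extract(x):
    """Run x through VGG, collecting style and content activations."""
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(feat):
    """Gram matrix of a feature map: channel-correlation style statistic."""
    b, c, h, w = feat.shape
    f = feat.reshape(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, steps=300, style_weight=1e6, content_weight=1.0):
    """Optimize the output image directly to match style and content statistics."""
    with torch.no_grad():
        target_styles = [gram(s) for s in extract(style)[0]]
        target_content = extract(content)[1]
    img = content.clone().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        style_feats, content_feat = extract(img)
        style_loss = sum(F.mse_loss(gram(f), t)
                         for f, t in zip(style_feats, target_styles))
        content_loss = F.mse_loss(content_feat, target_content)
        (style_weight * style_loss + content_weight * content_loss).backward()
        opt.step()
    return img.detach().clamp(0, 1)
```

The Gram matrices summarize channel correlations as a style statistic; matching them while keeping one deep feature map close to the content image is what lets the output adopt a new style without losing its content, the trade-off the training-free diffusion methods below revisit with attention features instead of VGG statistics.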
Papers
Style Injection in Diffusion: A Training-free Approach for Adapting Large-scale Diffusion Models for Style Transfer
Jiwoo Chung, Sangeek Hyun, Jae-Pil Heo
ArtBank: Artistic Style Transfer with Pre-trained Diffusion Model and Implicit Style Prompt Bank
Zhanjie Zhang, Quanwei Zhang, Guangyuan Li, Wei Xing, Lei Zhao, Jiakai Sun, Zehua Lan, Junsheng Luan, Yiling Huang, Huaizhong Lin