Style Transfer
Style transfer aims to modify the visual or auditory style of data (images, audio, 3D scenes, text) while preserving its content. Current research focuses on efficient, controllable methods built on architectures such as diffusion models, neural radiance fields, transformers, and Gaussian splatting, often combined with attention mechanisms or direct optimization to enable training-free or few-shot operation. These advances are impacting image editing, 3D modeling, audio processing, and natural language processing by giving practitioners finer creative control and more efficient manipulation of multimedia data, with applications ranging from artistic expression to medical image analysis.
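To make the optimization-based family concrete, below is a minimal sketch of classic image style transfer in the spirit of Gatys et al.: the pixels of an output image are optimized directly against a content loss (feature matching) and a style loss (Gram-matrix matching), both computed from a frozen VGG-19. This is an illustrative baseline, not the method of any paper listed below; the layer indices, loss weights, and step count are assumptions chosen for clarity.

```python
# Minimal optimization-based style transfer sketch (Gatys-style).
# Assumes content/style are (1, 3, H, W) tensors normalized for ImageNet.
import torch
import torch.nn.functional as F
from torchvision.models import vgg19, VGG19_Weights

device = "cuda" if torch.cuda.is_available() else "cpu"

# Frozen VGG-19 feature extractor; only the input image is optimized.
vgg = vgg19(weights=VGG19_Weights.IMAGENET1K_V1).features.to(device).eval()
for p in vgg.parameters():
    p.requires_grad_(False)

STYLE_LAYERS = {0, 5, 10, 19, 28}  # conv1_1..conv5_1 (illustrative choice)
CONTENT_LAYER = 21                 # conv4_2 (illustrative choice)

def features(x):
    """Collect style-layer activations and the content-layer activation."""
    style_feats, content_feat = [], None
    for i, layer in enumerate(vgg):
        x = layer(x)
        if i in STYLE_LAYERS:
            style_feats.append(x)
        if i == CONTENT_LAYER:
            content_feat = x
    return style_feats, content_feat

def gram(f):
    """Gram matrix of a feature map, normalized by its size."""
    b, c, h, w = f.shape
    f = f.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def stylize(content, style, steps=300, style_weight=1e5):
    """Optimize the pixels of a copy of `content` toward `style`'s statistics."""
    target_grams = [gram(f).detach() for f in features(style)[0]]
    target_content = features(content)[1].detach()

    img = content.clone().detach().requires_grad_(True)
    opt = torch.optim.Adam([img], lr=0.02)
    for _ in range(steps):
        opt.zero_grad()
        s_feats, c_feat = features(img)
        c_loss = F.mse_loss(c_feat, target_content)
        s_loss = sum(F.mse_loss(gram(f), g)
                     for f, g in zip(s_feats, target_grams))
        (c_loss + style_weight * s_loss).backward()
        opt.step()
    return img.detach()
```

In practice, L-BFGS is often used in place of Adam and a total-variation penalty is commonly added to suppress high-frequency artifacts; the training-free and feed-forward methods surveyed above replace this per-image optimization loop entirely.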
Papers
Authorship Style Transfer with Policy Optimization
Shuai Liu, Shantanu Agarwal, Jonathan May
StyleGaussian: Instant 3D Style Transfer with Gaussian Splatting
Kunhao Liu, Fangneng Zhan, Muyu Xu, Christian Theobalt, Ling Shao, Shijian Lu
Gender-ambiguous voice generation through feminine speaking style transfer in male voices
Maria Koutsogiannaki, Shafel Mc Dowall, Ioannis Agiomyrgiannakis
ConRF: Zero-shot Stylization of 3D Scenes with Conditioned Radiation Fields
Xingyu Miao, Yang Bai, Haoran Duan, Fan Wan, Yawen Huang, Yang Long, Yefeng Zheng
Phrase Grounding-based Style Transfer for Single-Domain Generalized Object Detection
Hao Li, Wei Wang, Cong Wang, Zhigang Luo, Xinwang Liu, Kenli Li, Xiaochun Cao