Style Representation
Style representation research focuses on capturing and manipulating the stylistic aspects of data across modalities such as images, audio, and text, enabling tasks like style transfer, generation, and classification. Current work draws on diverse approaches, including generative adversarial networks (GANs), diffusion models, autoencoders, and attention mechanisms, often within a framework that disentangles content from style. The field underpins applications in art generation, text-to-speech synthesis, and other areas requiring fine-grained control over stylistic features, and it informs both artistic creation and the scientific understanding of how style is perceived and represented.
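To make the content/style disentanglement idea concrete, below is a minimal, hypothetical sketch of an autoencoder with separate content and style encoders. It is not any specific paper's architecture; all names and dimensions are illustrative, and real systems typically add adversarial, contrastive, or cycle losses to actually force the two codes apart.

```python
import torch
import torch.nn as nn

class DisentanglingAutoencoder(nn.Module):
    """Toy autoencoder that splits an input into a content code and a
    style code, then reconstructs from their concatenation.
    Hypothetical illustration only; dimensions are arbitrary."""

    def __init__(self, dim=256, content_dim=64, style_dim=16):
        super().__init__()
        self.content_enc = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, content_dim))
        self.style_enc = nn.Sequential(
            nn.Linear(dim, 128), nn.ReLU(), nn.Linear(128, style_dim))
        self.decoder = nn.Sequential(
            nn.Linear(content_dim + style_dim, 128), nn.ReLU(),
            nn.Linear(128, dim))

    def forward(self, x):
        c, s = self.content_enc(x), self.style_enc(x)
        return self.decoder(torch.cat([c, s], dim=-1)), c, s

def style_transfer(model, x_content, x_style):
    # Style transfer = recombine one input's content code with
    # another input's style code and decode.
    _, c, _ = model(x_content)
    _, _, s = model(x_style)
    return model.decoder(torch.cat([c, s], dim=-1))

model = DisentanglingAutoencoder()
out = style_transfer(model, torch.randn(1, 256), torch.randn(1, 256))
```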
Papers
DreamFactory: Pioneering Multi-Scene Long Video Generation with a Multi-Agent Framework
Zhifei Xie, Daniel Tang, Dingwei Tan, Jacques Klein, Tegawendé F. Bissyandé, Saad Ezzini
JieHua Paintings Style Feature Extracting Model using Stable Diffusion with ControlNet
Yujia Gu, Haofeng Li, Xinyu Fang, Zihan Peng, Yinan Peng
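The second paper builds on Stable Diffusion conditioned by a ControlNet. As a hedged illustration of that general technique (not the authors' pipeline), the sketch below uses Hugging Face's diffusers library to attach a Canny-edge ControlNet to Stable Diffusion v1.5; the file `edge_map.png`, the prompt, and the step count are placeholders.

```python
import torch
from PIL import Image
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline

# Load a Canny-edge ControlNet and attach it to Stable Diffusion v1.5.
controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-canny", torch_dtype=torch.float16)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")

# edge_map.png is a placeholder edge drawing: the ControlNet input fixes
# the composition (content) while the text prompt steers the style.
control_image = Image.open("edge_map.png").convert("RGB")
result = pipe(
    "a landscape in the meticulous Jiehua painting style",
    image=control_image,
    num_inference_steps=30,
).images[0]
result.save("jiehua_styled.png")
```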