Paper ID: 2402.00769 • Published Feb 1, 2024
AnimateLCM: Computation-Efficient Personalized Style Video Generation without Personalized Video Data
Fu-Yun Wang, Zhaoyang Huang, Weikang Bian, Xiaoyu Shi, Keqiang Sun, Guanglu Song, Yu Liu, Hongsheng Li
This paper introduces an effective method for computation-efficient
personalized-style video generation that requires no personalized video data.
It reduces the generation time of similarly sized video diffusion models from
25 seconds to around 1 second while maintaining the same level of performance.
The method's effectiveness lies in its dual-level decoupled learning approach:
1) decoupling the learning of video style from the acceleration of video
generation, which enables personalized-style video generation without any
personalized-style video data, and 2) decoupling the acceleration of image
generation from the acceleration of video motion generation, which improves
training efficiency and mitigates the negative effects of low-quality video
data.
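The first decoupling can be pictured as combining independently trained weight
deltas with a shared pretrained base at inference time. The sketch below is an
illustration of that idea only; the function and parameter names (`merge_weights`,
`accel_delta`, `style_delta`, `alpha`, `beta`) are hypothetical and do not come
from the paper's implementation.

```python
# Hedged sketch: an acceleration delta (learned separately for few-step
# sampling) and a style-adapter delta (learned from images only, no
# personalized video data) are both added onto shared base weights.
def merge_weights(base, accel_delta, style_delta, alpha=1.0, beta=1.0):
    """Combine a pretrained base with two independently trained deltas.

    base        -- pretrained diffusion weights (dict: name -> value)
    accel_delta -- delta for few-step generation acceleration
    style_delta -- delta from a personalized-style adapter
    alpha, beta -- scaling factors for each delta
    """
    merged = {}
    for name, w in base.items():
        merged[name] = (w
                        + alpha * accel_delta.get(name, 0.0)
                        + beta * style_delta.get(name, 0.0))
    return merged

# Toy example with scalar "weights" standing in for tensors.
base = {"attn.q": 1.0, "attn.k": 2.0}
accel = {"attn.q": 0.5}
style = {"attn.k": -0.25}
print(merge_weights(base, accel, style))  # → {'attn.q': 1.5, 'attn.k': 1.75}
```

Because the two deltas are trained independently, a new style adapter can be
swapped in without retraining the acceleration component, which is the point
of the first decoupling.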