Video Diffusion Distillation
Video diffusion distillation aims to bring the efficiency of image diffusion models to high-quality video generation, addressing limitations of existing video diffusion models, particularly in motion consistency and visual fidelity. Current research focuses on disentangling motion and appearance information within video data, employing techniques such as temporal attention adaptation and static-dynamic memory blocks to improve motion control and reduce computational cost. These advances improve both the efficiency and the quality of video generation, with applications ranging from text-to-video synthesis to video editing and enhancement.
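To make the distillation idea concrete, here is a minimal numerical sketch of one common formulation (progressive distillation): a student learns to reproduce two consecutive teacher denoising steps in a single step, halving the sampling budget. The `teacher_denoise` and `student_denoise` functions below are hypothetical toy stand-ins, not any real model's API.

```python
import numpy as np

rng = np.random.default_rng(0)

def teacher_denoise(x, t):
    # Hypothetical frozen teacher: one tiny denoising step
    # (a stand-in for a real video diffusion model's step).
    return x * (1.0 - 0.1 * t)

def two_teacher_steps(x, t):
    # Two consecutive teacher steps define the distillation target.
    return teacher_denoise(teacher_denoise(x, t), t - 1)

def student_denoise(x, t, w):
    # Hypothetical student: a single learnable scale per timestep.
    return x * w[t]

def distill_loss(x, t, w):
    # Progressive-distillation-style objective: the student's one step
    # should match two teacher steps.
    target = two_teacher_steps(x, t)
    pred = student_denoise(x, t, w)
    return float(np.mean((pred - target) ** 2))

# Toy "video" latent: (frames, height, width).
x = rng.standard_normal((4, 8, 8))
w = np.ones(10)  # student parameters, one scale per timestep
loss_before = distill_loss(x, 5, w)

# In this linear toy, the optimal student scale is the product of the
# two teacher scales, so the loss can be driven to zero in closed form.
w[5] = (1.0 - 0.1 * 5) * (1.0 - 0.1 * 4)
loss_after = distill_loss(x, 5, w)
```

In practice the student is a full video network trained by gradient descent, and recent work additionally separates the motion and appearance pathways being distilled; the toy above only illustrates the step-halving objective itself.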
Papers
June 11, 2024
December 1, 2023
October 4, 2023
March 27, 2023
February 18, 2023