Video Diffusion Model
Video diffusion models generate videos by iteratively removing noise from random data, guided by text prompts or other conditioning information. Current research focuses on improving temporal consistency, enhancing video quality (especially at high resolutions), and developing efficient algorithms for long video generation and various control mechanisms (e.g., camera control, object manipulation). These advancements are significant for applications in film production, animation, and 3D modeling, offering powerful tools for creating realistic and controllable video content.
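The iterative denoising described above can be sketched as a minimal DDPM-style sampling loop. This is an illustrative sketch only: `toy_denoiser` is a hypothetical placeholder for the large text-conditioned neural network a real video diffusion model would use, and the schedule values and tensor shape (frames, height, width) are assumptions for demonstration.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    # Linear noise schedule: betas control how much noise is added/removed
    # at each step; alpha_bars are the cumulative signal-retention factors.
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def toy_denoiser(x, t):
    # Hypothetical stand-in for the learned noise predictor. A real model
    # is a neural network conditioned on the timestep and a text prompt.
    return np.zeros_like(x)

def sample(shape=(4, 8, 8), T=50, seed=0):
    """DDPM-style ancestral sampling: start from pure Gaussian noise and
    iteratively subtract the predicted noise. shape = (frames, H, W)."""
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    x = rng.standard_normal(shape)  # start from random noise
    for t in reversed(range(T)):
        eps = toy_denoiser(x, t)    # predicted noise at step t
        coef = betas[t] / np.sqrt(1.0 - alpha_bars[t])
        mean = (x - coef * eps) / np.sqrt(alphas[t])
        # Add fresh noise at every step except the last one.
        noise = rng.standard_normal(shape) if t > 0 else 0.0
        x = mean + np.sqrt(betas[t]) * noise
    return x

video = sample()
print(video.shape)
```

Real systems run this loop in a learned latent space over all frames jointly, which is where the temporal-consistency challenges mentioned above arise.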