Video Diffusion Model
Video diffusion models generate videos by iteratively removing noise from randomly sampled noise, guided by text prompts or other conditioning signals. Current research focuses on improving temporal consistency, enhancing video quality (especially at high resolutions), and developing efficient methods for long video generation and fine-grained control (e.g., camera control, object manipulation). These advances matter for film production, animation, and 3D modeling, where they offer powerful tools for creating realistic, controllable video content.
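The iterative denoising described above can be sketched as a toy DDPM-style reverse loop over a video-shaped tensor. This is a minimal illustration, not any particular model: the noise schedule values are hypothetical, and the network's noise prediction is replaced by a zero placeholder where a real video diffusion model would run a text-conditioned neural network.

```python
import numpy as np

def toy_denoise_video(frames=8, height=16, width=16, steps=50, seed=0):
    """Schematic DDPM-style reverse loop for a video tensor (T, H, W).

    The 'model' is a stand-in that predicts zero noise; a real video
    diffusion model would substitute a neural network conditioned on
    a text prompt or other signal.
    """
    rng = np.random.default_rng(seed)
    # Linear noise schedule (illustrative values only).
    betas = np.linspace(1e-4, 0.02, steps)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)

    # Start from pure Gaussian noise shaped like a short video clip.
    x = rng.standard_normal((frames, height, width))

    for t in reversed(range(steps)):
        eps_hat = np.zeros_like(x)  # placeholder for the network's noise prediction
        # Standard DDPM posterior-mean update using the predicted noise.
        x = (x - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps_hat) / np.sqrt(alphas[t])
        if t > 0:
            # Add fresh noise on every step except the last.
            x += np.sqrt(betas[t]) * rng.standard_normal(x.shape)
    return x

video = toy_denoise_video()
print(video.shape)  # (8, 16, 16)
```

Temporal consistency work mentioned above typically enters through the network itself (e.g., attention across the frame axis), not the sampling loop, which is why the loop treats the frame dimension like any other axis here.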