Video LDM
Video Latent Diffusion Models (LDMs) are generative models that synthesize high-quality videos, often conditioned on text or other modalities, by running the diffusion process in a compressed latent space for efficiency. Current research focuses on improving temporal coherence, incorporating multi-modal information (e.g., audio, text), and adapting pre-trained image LDMs for video editing and generation tasks. These advances matter for applications ranging from realistic video synthesis and editing to data augmentation for scientific simulations and medical image enhancement, offering improvements in both speed and quality over previous methods.
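To make the core idea concrete, the sketch below shows DDPM-style ancestral sampling carried out entirely in a compressed latent space, which is what distinguishes latent diffusion from pixel-space video diffusion. It is a minimal illustration assuming a PyTorch environment; the names (ToyVideoDenoiser, sample_video_latents) and the single 3D convolution standing in for a spatio-temporal U-Net are placeholders, not the API of any specific Video LDM implementation.

```python
# Minimal sketch of latent-space video diffusion sampling (illustrative only).
import torch
import torch.nn as nn


class ToyVideoDenoiser(nn.Module):
    """Placeholder noise predictor over latent video tensors of shape (B, C, T, H, W)."""

    def __init__(self, channels: int = 4):
        super().__init__()
        # A real Video LDM would use a factorized spatio-temporal U-Net with
        # text conditioning; a single 3D conv keeps this sketch runnable.
        self.net = nn.Conv3d(channels, channels, kernel_size=3, padding=1)

    def forward(self, z_t: torch.Tensor, t: torch.Tensor) -> torch.Tensor:
        # Timestep (and text) conditioning omitted for brevity.
        return self.net(z_t)


@torch.no_grad()
def sample_video_latents(model: nn.Module, steps: int = 50,
                         shape=(1, 4, 8, 32, 32)) -> torch.Tensor:
    """DDPM-style ancestral sampling in the compressed latent space."""
    betas = torch.linspace(1e-4, 2e-2, steps)
    alphas = 1.0 - betas
    alpha_bars = torch.cumprod(alphas, dim=0)

    z = torch.randn(shape)  # start from Gaussian noise in latent space
    for i in reversed(range(steps)):
        t = torch.full((shape[0],), i, dtype=torch.long)
        eps_hat = model(z, t)  # predicted noise (epsilon parameterization)
        # Posterior mean of the reverse diffusion step.
        coef = betas[i] / torch.sqrt(1.0 - alpha_bars[i])
        mean = (z - coef * eps_hat) / torch.sqrt(alphas[i])
        noise = torch.randn_like(z) if i > 0 else torch.zeros_like(z)
        z = mean + torch.sqrt(betas[i]) * noise
    return z  # a pretrained VAE decoder would map these latents back to RGB frames


if __name__ == "__main__":
    latents = sample_video_latents(ToyVideoDenoiser())
    print(latents.shape)  # torch.Size([1, 4, 8, 32, 32])
```

Because the denoising loop operates on small latent tensors rather than full-resolution frames, both training and sampling are far cheaper than pixel-space video diffusion; the quality of the final video then depends on the latent autoencoder and on how temporal layers are added to the pre-trained image backbone.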