Video Generative Models
Video generative models aim to create realistic and diverse videos from various inputs, such as text descriptions or single images, with research focusing on efficiency, controllability, and multi-modal alignment. Current work emphasizes advances in diffusion models, variational autoencoders (VAEs), and transformer-based architectures, often incorporating techniques such as latent-space compression and inter-frame motion consistency to improve generation speed and quality. These advances have implications for many fields, including video editing, content creation, robotics (through action-conditional generation), and even combating misinformation via improved detection and tracing of synthetic videos.
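To make the latent-diffusion idea above concrete, here is a minimal sketch of DDPM-style ancestral sampling over a latent video tensor. All names, shapes, and the linear beta schedule are illustrative assumptions, not drawn from any specific paper; the learned denoiser is replaced by a stub so the example stays self-contained.

```python
import numpy as np

def make_schedule(T=50, beta_start=1e-4, beta_end=0.02):
    # Linear noise schedule (an illustrative choice, not from any one paper).
    betas = np.linspace(beta_start, beta_end, T)
    alphas = 1.0 - betas
    alpha_bars = np.cumprod(alphas)
    return betas, alphas, alpha_bars

def predict_noise(z_t, t):
    # Stand-in for a learned denoiser (e.g. a video U-Net or transformer);
    # returns zeros here so the sketch runs without trained weights.
    return np.zeros_like(z_t)

def sample_latent_video(frames=8, channels=4, h=16, w=16, T=50, seed=0):
    rng = np.random.default_rng(seed)
    betas, alphas, alpha_bars = make_schedule(T)
    # Start from pure Gaussian noise in the compressed latent space,
    # one latent grid per video frame.
    z = rng.standard_normal((frames, channels, h, w))
    for t in reversed(range(T)):
        eps = predict_noise(z, t)
        # DDPM posterior mean for z_{t-1} given z_t and predicted noise.
        z = (z - betas[t] / np.sqrt(1.0 - alpha_bars[t]) * eps) / np.sqrt(alphas[t])
        if t > 0:  # add fresh noise at every step except the last
            z += np.sqrt(betas[t]) * rng.standard_normal(z.shape)
    return z  # latents a VAE decoder would map back to RGB frames

latents = sample_latent_video()
print(latents.shape)  # (8, 4, 16, 16)
```

In a real system, `predict_noise` would be a trained network conditioned on text or image inputs, and temporal layers would enforce the inter-frame motion consistency mentioned above; the loop structure, however, is the same.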
Papers
October 8, 2024
September 19, 2024
August 21, 2024
August 13, 2024
May 30, 2024
May 28, 2024
May 26, 2024
May 24, 2024
May 21, 2024
April 10, 2024
March 18, 2024
February 20, 2024
January 30, 2024
December 20, 2023
November 29, 2023
November 25, 2023