Controllable Video Generation
Controllable video generation aims to create videos that precisely match user-specified parameters, going beyond simple text prompts to encompass fine-grained control over object motion, camera angles, and even scene composition. Current research heavily utilizes diffusion models, often incorporating attention mechanisms and adapters to integrate diverse control signals (e.g., bounding boxes, trajectories, masks, language descriptions) into the generation process. This field is significant for its potential to revolutionize applications ranging from autonomous driving simulation and robot planning to animation and visual effects, providing high-quality, customizable video data for training and creative purposes.
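To make the adapter-plus-attention idea concrete, below is a minimal sketch in PyTorch of how a control signal (here a single normalized bounding box) might be encoded into tokens and injected into one denoiser block via cross-attention. All class and variable names are illustrative assumptions, not taken from any specific paper.

```python
import torch
import torch.nn as nn

class ControlAdapter(nn.Module):
    """Encodes a control signal (e.g., a flattened bounding box or
    trajectory) into tokens the denoiser can cross-attend to."""
    def __init__(self, control_dim: int, hidden_dim: int, num_tokens: int = 8):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(control_dim, hidden_dim),
            nn.SiLU(),
            nn.Linear(hidden_dim, num_tokens * hidden_dim),
        )
        self.num_tokens = num_tokens
        self.hidden_dim = hidden_dim

    def forward(self, control: torch.Tensor) -> torch.Tensor:
        # control: (batch, control_dim) -> (batch, num_tokens, hidden_dim)
        return self.proj(control).view(-1, self.num_tokens, self.hidden_dim)

class ControlledBlock(nn.Module):
    """One denoiser block: self-attention over video latent tokens,
    then cross-attention over the adapter's control tokens."""
    def __init__(self, hidden_dim: int, num_heads: int = 4):
        super().__init__()
        self.self_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.cross_attn = nn.MultiheadAttention(hidden_dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(hidden_dim)
        self.norm2 = nn.LayerNorm(hidden_dim)

    def forward(self, x: torch.Tensor, control_tokens: torch.Tensor) -> torch.Tensor:
        # x: (batch, frames * patches, hidden_dim) video latent tokens
        h = self.norm1(x)
        x = x + self.self_attn(h, h, h, need_weights=False)[0]
        h = self.norm2(x)
        x = x + self.cross_attn(h, control_tokens, control_tokens, need_weights=False)[0]
        return x

# Toy usage: 2 clips, 16 latent tokens each, one 4-d control vector per clip
# (a hypothetical normalized bounding box: x, y, w, h).
adapter = ControlAdapter(control_dim=4, hidden_dim=64)
block = ControlledBlock(hidden_dim=64)
latents = torch.randn(2, 16, 64)
bbox = torch.rand(2, 4)
out = block(latents, adapter(bbox))
print(out.shape)  # torch.Size([2, 16, 64])
```

One reason adapter-style conditioning is popular in this setting: the control pathway is a separate set of weights, so a pretrained video diffusion backbone can in principle stay frozen while only the adapter and cross-attention layers are trained on the new control signal.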