Controllable Video Generation
Controllable video generation aims to create videos that precisely match user-specified conditions, going beyond simple text prompts to offer fine-grained control over object motion, camera angles, and scene composition. Current research relies heavily on diffusion models, often incorporating attention mechanisms and lightweight adapters to integrate diverse control signals (e.g., bounding boxes, motion trajectories, masks, language descriptions) into the generation process. The field is significant for applications ranging from autonomous-driving simulation and robot planning to animation and visual effects, where it can supply high-quality, customizable video data for both training and creative purposes.
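
To make the adapter pattern above concrete, here is a minimal PyTorch sketch of one common recipe: a small encoder turns a control signal (per-frame bounding boxes, in this example) into tokens, and a zero-initialized cross-attention adapter injects them into a denoiser block's features. All names, shapes, and hyperparameters here (BoxTrajectoryEncoder, ControlCrossAttentionAdapter, dim=320) are illustrative assumptions for this sketch, not the API of any specific paper or library.

```python
# Minimal sketch, assuming a generic diffusion denoiser whose intermediate
# features are flattened into (batch, tokens, dim). The control signal is a
# per-frame bounding-box trajectory; other signals (masks, camera poses)
# would swap in a different encoder but reuse the same adapter.
import torch
import torch.nn as nn


class BoxTrajectoryEncoder(nn.Module):
    """Embeds per-frame boxes (x1, y1, x2, y2) into control tokens."""

    def __init__(self, dim: int = 320):
        super().__init__()
        self.proj = nn.Sequential(
            nn.Linear(4, dim), nn.SiLU(), nn.Linear(dim, dim)
        )

    def forward(self, boxes: torch.Tensor) -> torch.Tensor:
        # boxes: (batch, frames, 4), coordinates normalized to [0, 1]
        return self.proj(boxes)  # (batch, frames, dim) control tokens


class ControlCrossAttentionAdapter(nn.Module):
    """Lets the denoiser's latent tokens attend to the control tokens.

    The output projection is zero-initialized (a common trick) so the
    adapter is an identity at the start of fine-tuning and the pretrained
    backbone's behavior is initially unchanged.
    """

    def __init__(self, dim: int = 320, heads: int = 8):
        super().__init__()
        self.attn = nn.MultiheadAttention(dim, heads, batch_first=True)
        self.out = nn.Linear(dim, dim)
        nn.init.zeros_(self.out.weight)
        nn.init.zeros_(self.out.bias)

    def forward(self, latent_tokens, control_tokens):
        # latent_tokens:  (batch, tokens, dim) from a denoiser block
        # control_tokens: (batch, frames, dim) from the control encoder
        attended, _ = self.attn(latent_tokens, control_tokens, control_tokens)
        return latent_tokens + self.out(attended)  # residual injection


if __name__ == "__main__":
    batch, frames, tokens, dim = 2, 16, 256, 320
    boxes = torch.rand(batch, frames, 4)       # hypothetical box trajectory
    latents = torch.randn(batch, tokens, dim)  # stand-in for backbone features
    encoder = BoxTrajectoryEncoder(dim)
    adapter = ControlCrossAttentionAdapter(dim)
    print(adapter(latents, encoder(boxes)).shape)  # torch.Size([2, 256, 320])
```

The residual, zero-initialized injection is what makes this kind of adapter cheap to train: the backbone can stay frozen while only the encoder and adapter learn to steer generation toward the control signal.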