Controllable Video Generation
Controllable video generation aims to produce videos that closely follow user-specified conditions, going beyond plain text prompts to fine-grained control over object motion, camera angles, and scene composition. Current research builds largely on diffusion models, typically adding attention mechanisms and lightweight adapters that inject diverse control signals (e.g., bounding boxes, motion trajectories, masks, language descriptions) into the generation process. The field is significant for applications ranging from autonomous-driving simulation and robot planning to animation and visual effects, since it can supply high-quality, customizable video data for both training and creative work.
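The adapter pattern described above usually works by encoding the control signal into tokens that the denoising backbone attends to. Below is a minimal, hypothetical PyTorch sketch of this idea, injecting per-frame bounding-box controls into a diffusion model's latent features via residual cross-attention; all class and parameter names here are invented for illustration, and real systems differ considerably in detail.

```python
import torch
import torch.nn as nn

class ControlAdapter(nn.Module):
    """Hypothetical adapter: encodes bounding-box control signals into
    tokens and injects them into denoiser features via cross-attention."""
    def __init__(self, feat_dim: int, ctrl_dim: int = 4, n_heads: int = 8):
        super().__init__()
        # Project raw control signals (x, y, w, h per box) into the feature space.
        self.ctrl_proj = nn.Sequential(
            nn.Linear(ctrl_dim, feat_dim),
            nn.SiLU(),
            nn.Linear(feat_dim, feat_dim),
        )
        self.attn = nn.MultiheadAttention(feat_dim, n_heads, batch_first=True)
        self.norm = nn.LayerNorm(feat_dim)

    def forward(self, feats: torch.Tensor, ctrl: torch.Tensor) -> torch.Tensor:
        # feats: (B, N, D) spatio-temporal tokens from the diffusion backbone
        # ctrl:  (B, K, 4) normalized boxes serving as control tokens
        ctrl_tokens = self.ctrl_proj(ctrl)  # (B, K, D)
        attended, _ = self.attn(self.norm(feats), ctrl_tokens, ctrl_tokens)
        return feats + attended  # residual injection keeps the backbone intact

# Toy usage: 2 videos, a 16x16x8 latent token grid of width 320, 3 boxes each.
feats = torch.randn(2, 16 * 16 * 8, 320)
boxes = torch.rand(2, 3, 4)
adapter = ControlAdapter(feat_dim=320)
out = adapter(feats, boxes)
print(out.shape)  # torch.Size([2, 2048, 320])
```

The residual form is the key design choice: because the adapter only adds a correction to the frozen backbone's features, it can be trained on relatively little paired control data without degrading the base model's generation quality.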