Video Diffusion Model
Video diffusion models generate videos by iteratively denoising an initial sample of random noise, guided by text prompts or other conditioning signals. Current research focuses on improving temporal consistency, enhancing video quality (especially at high resolutions), and developing efficient methods for long video generation and fine-grained control (e.g., camera motion, object manipulation). These advances matter for applications in film production, animation, and 3D modeling, offering powerful tools for creating realistic and controllable video content.
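To make the iterative denoising concrete, the sketch below runs a plain DDPM-style reverse loop over a video latent tensor. Everything here is illustrative: the tiny `Conv3d` stands in for a real spatio-temporal denoiser, the linear beta schedule and step count are assumptions, and real models additionally condition on text embeddings and decode latents with a VAE.

```python
import torch

# Minimal sketch of the reverse (denoising) loop behind a video diffusion model.
# Assumptions: linear beta schedule, 50 steps, and a toy Conv3d as the denoiser.

T_STEPS = 50                       # number of denoising steps (assumed)
B, C, F, H, W = 1, 4, 16, 32, 32   # batch, latent channels, frames, height, width

betas = torch.linspace(1e-4, 0.02, T_STEPS)
alphas = 1.0 - betas
alpha_bars = torch.cumprod(alphas, dim=0)

# Placeholder noise-prediction network; real models also take text conditioning.
denoiser = torch.nn.Conv3d(C, C, kernel_size=3, padding=1)

@torch.no_grad()
def sample_video_latents():
    x = torch.randn(B, C, F, H, W)          # start from pure noise
    for t in reversed(range(T_STEPS)):
        eps_hat = denoiser(x)               # predicted noise at step t
        alpha, alpha_bar = alphas[t], alpha_bars[t]
        # DDPM posterior mean: subtract the predicted noise component.
        x = (x - (1 - alpha) / (1 - alpha_bar).sqrt() * eps_hat) / alpha.sqrt()
        if t > 0:
            x = x + betas[t].sqrt() * torch.randn_like(x)  # re-inject noise
    return x                                 # decode with a VAE in practice

latents = sample_video_latents()
print(latents.shape)  # torch.Size([1, 4, 16, 32, 32])
```

The papers below build on this basic loop, e.g., by parallelizing the denoising steps, constraining attention for streaming generation, or steering trajectories at sampling time.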
Papers
Video Diffusion Alignment via Reward Gradients
Mihir Prabhudesai, Russell Mendonca, Zheyang Qin, Katerina Fragkiadaki, Deepak Pathak
Live2Diff: Live Stream Translation via Uni-directional Attention in Video Diffusion Models
Zhening Xing, Gereon Fox, Yanhong Zeng, Xingang Pan, Mohamed Elgharib, Christian Theobalt, Kai Chen
FreeTraj: Tuning-Free Trajectory Control in Video Diffusion Models
Haonan Qiu, Zhaoxi Chen, Zhouxia Wang, Yingqing He, Menghan Xia, Ziwei Liu
Dreamitate: Real-World Visuomotor Policy Learning via Video Generation
Junbang Liang, Ruoshi Liu, Ege Ozguroglu, Sruthi Sudhakar, Achal Dave, Pavel Tokmakov, Shuran Song, Carl Vondrick
Video-Infinity: Distributed Long Video Generation
Zhenxiong Tan, Xingyi Yang, Songhua Liu, Xinchao Wang
4Real: Towards Photorealistic 4D Scene Generation via Video Diffusion Models
Heng Yu, Chaoyang Wang, Peiye Zhuang, Willi Menapace, Aliaksandr Siarohin, Junli Cao, Laszlo A Jeni, Sergey Tulyakov, Hsin-Ying Lee
AsyncDiff: Parallelizing Diffusion Models by Asynchronous Denoising
Zigeng Chen, Xinyin Ma, Gongfan Fang, Zhenxiong Tan, Xinchao Wang