Text-to-Video Generation
Text-to-video generation aims to create videos from textual descriptions, bridging the gap between human language and visual media. Current research heavily utilizes diffusion models, often incorporating 3D U-Nets or transformer architectures, and focuses on improving video quality, temporal consistency, controllability (including camera movement and object manipulation), and compositional capabilities—the ability to synthesize videos with multiple interacting elements. These advancements hold significant implications for various fields, including film production, animation, and virtual reality, by automating video creation and enabling more precise control over generated content.
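As a concrete illustration of the diffusion-based pipelines described above, the minimal sketch below generates a short clip from a text prompt with the Hugging Face diffusers library, assuming the ModelScope text-to-video checkpoint (damo-vilab/text-to-video-ms-1.7b, a 3D U-Net diffusion model). The prompt, output filename, and inference settings are illustrative only, and the exact frame format returned by the pipeline can differ across diffusers versions.

```python
# Minimal sketch: text-to-video generation with a diffusion model via diffusers.
# Assumes the ModelScope checkpoint "damo-vilab/text-to-video-ms-1.7b" and a CUDA GPU;
# prompt, step count, and output path are illustrative.
import torch
from diffusers import DiffusionPipeline
from diffusers.utils import export_to_video

# Load the pretrained text-to-video diffusion pipeline in half precision.
pipe = DiffusionPipeline.from_pretrained(
    "damo-vilab/text-to-video-ms-1.7b", torch_dtype=torch.float16, variant="fp16"
)
pipe = pipe.to("cuda")

prompt = "A panda playing guitar on a beach at sunset"

# num_inference_steps trades generation quality against speed.
# Note: in recent diffusers versions .frames is indexed per batch item (hence [0]);
# older versions may return the frame list directly.
video_frames = pipe(prompt, num_inference_steps=25).frames[0]

# Write the generated frames to an MP4 file.
export_to_video(video_frames, "panda_guitar.mp4")
```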
Papers
TALC: Time-Aligned Captions for Multi-Scene Text-to-Video Generation
Hritik Bansal, Yonatan Bitton, Michal Yarom, Idan Szpektor, Aditya Grover, Kai-Wei Chang
Vidu: a Highly Consistent, Dynamic and Skilled Text-to-Video Generator with Diffusion Models
Fan Bao, Chendong Xiang, Gang Yue, Guande He, Hongzhou Zhu, Kaiwen Zheng, Min Zhao, Shilong Liu, Yaole Wang, Jun Zhu