Text to Video
Text-to-video (T2V) generation aims to synthesize realistic videos from textual descriptions, with research focused on improving temporal consistency, handling multiple objects and actions, and enhancing controllability. Current work relies heavily on diffusion models, often building on pre-trained text-to-image models and incorporating architectures such as Diffusion Transformers (DiT) and spatial-temporal attention to improve video quality and coherence. This rapidly evolving field has significant implications for content creation, education, and other applications, and it is driving advances in both model architectures and evaluation methodologies that target challenges such as hallucination and compositional generation.
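To make the spatial-temporal attention pattern mentioned above concrete, the sketch below shows one common factorization used by video diffusion backbones: attention over the tokens of each frame, followed by attention across frames at each spatial location (which is what promotes temporal consistency). This is a minimal illustrative example, not the architecture of any specific paper listed here; the class and parameter names are hypothetical.

```python
# Minimal sketch of factorized spatial-temporal attention for video latents.
# Assumes input tokens shaped (batch, frames, tokens_per_frame, dim).
import torch
import torch.nn as nn


class SpatialTemporalAttention(nn.Module):
    """Self-attention over spatial tokens within each frame,
    then over frames at each spatial location."""

    def __init__(self, dim: int, num_heads: int = 8):
        super().__init__()
        self.spatial_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.temporal_attn = nn.MultiheadAttention(dim, num_heads, batch_first=True)
        self.norm1 = nn.LayerNorm(dim)
        self.norm2 = nn.LayerNorm(dim)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        b, t, n, d = x.shape

        # Spatial attention: each frame attends over its own tokens.
        xs = x.reshape(b * t, n, d)
        h = self.norm1(xs)
        xs = xs + self.spatial_attn(h, h, h, need_weights=False)[0]

        # Temporal attention: each spatial location attends across frames,
        # which encourages frame-to-frame consistency.
        xt = xs.reshape(b, t, n, d).permute(0, 2, 1, 3).reshape(b * n, t, d)
        h = self.norm2(xt)
        xt = xt + self.temporal_attn(h, h, h, need_weights=False)[0]

        return xt.reshape(b, n, t, d).permute(0, 2, 1, 3)


if __name__ == "__main__":
    # Toy usage: 2 videos, 8 frames, 16x16 latent grid (256 tokens), 64-dim features.
    block = SpatialTemporalAttention(dim=64, num_heads=4)
    video_tokens = torch.randn(2, 8, 256, 64)
    print(block(video_tokens).shape)  # torch.Size([2, 8, 256, 64])
```

Factorizing attention this way keeps the cost per layer at roughly O(T·N²) + O(N·T²) rather than O((T·N)²) for full joint attention, which is why many T2V models adopt it when extending pre-trained text-to-image backbones with a temporal dimension.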
Papers
AVID: Any-Length Video Inpainting with Diffusion Model
Zhixing Zhang, Bichen Wu, Xiaoyan Wang, Yaqiao Luo, Luxin Zhang, Yinan Zhao, Peter Vajda, Dimitris Metaxas, Licheng Yu
AnimateZero: Video Diffusion Models are Zero-Shot Image Animators
Jiwen Yu, Xiaodong Cun, Chenyang Qi, Yong Zhang, Xintao Wang, Ying Shan, Jian Zhang
F3-Pruning: A Training-Free and Generalized Pruning Strategy towards Faster and Finer Text-to-Video Synthesis
Sitong Su, Jianzhi Liu, Lianli Gao, Jingkuan Song