Image-to-Video Diffusion Models

Image-to-video diffusion models aim to generate realistic videos from a single input image, addressing core challenges in video synthesis such as maintaining temporal consistency across frames and producing plausible motion. Current research focuses on improving controllability through techniques such as conditioning on reference images, adapting pretrained models to tasks like keyframe interpolation and video inpainting, and mitigating failure modes such as conditional image leakage and overly limited motion. These advances matter for applications including animation, video editing, and the enhancement of existing footage, where they improve both quality and realism.
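
To make the conditioning setup concrete, the minimal sketch below runs a pretrained image-to-video diffusion pipeline with the Hugging Face diffusers library. The Stable Video Diffusion checkpoint, the input image URL, and parameter values such as motion_bucket_id are illustrative assumptions rather than the method of any single paper surveyed here; noise_aug_strength adds noise to the conditioning frame, one simple mechanism such pipelines use to reduce conditional image leakage.

```python
# Minimal image-to-video sketch using Hugging Face diffusers (illustrative setup).
import torch
from diffusers import StableVideoDiffusionPipeline
from diffusers.utils import export_to_video, load_image

# Load a pretrained image-to-video diffusion model (assumed checkpoint).
pipe = StableVideoDiffusionPipeline.from_pretrained(
    "stabilityai/stable-video-diffusion-img2vid-xt",
    torch_dtype=torch.float16,
    variant="fp16",
)
pipe.enable_model_cpu_offload()  # keeps GPU memory usage manageable

# The single conditioning image; the URL is a placeholder assumption.
image = load_image("https://example.com/input.png")
image = image.resize((1024, 576))  # resolution this checkpoint was trained at

generator = torch.manual_seed(42)  # fixed seed for reproducible motion
result = pipe(
    image,
    decode_chunk_size=8,      # decode latents in chunks to limit VRAM use
    motion_bucket_id=127,     # higher values request more motion
    noise_aug_strength=0.02,  # noise added to the conditioning image
    generator=generator,
)
export_to_video(result.frames[0], "generated.mp4", fps=7)
```

Raising motion_bucket_id or noise_aug_strength is a common way to coax more motion out of the generation, at the cost of fidelity to the input frame.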

Papers