Image to Video Generation
Image-to-video generation aims to synthesize realistic, temporally consistent video from a single input image, often guided by text prompts or other conditioning signals. Current research centers on diffusion models, frequently augmented with modules for physics simulation, camera control, and motion awareness to improve realism and controllability. These advances are expanding video editing capabilities and enabling applications such as animation creation, interactive image manipulation, and dynamic fashion displays for online shopping. Active challenges include maintaining visual consistency across frames and achieving precise control over the generated motion.
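The frame-to-frame consistency challenge mentioned above is often quantified with simple difference metrics. As a minimal, illustrative sketch (the function name `temporal_consistency` is hypothetical; published evaluations typically use flow-warped frame errors instead):

```python
import numpy as np

def temporal_consistency(frames):
    """Mean absolute difference between consecutive frames.

    frames: array of shape (T, H, W, C) with values in [0, 1].
    Lower values indicate smoother, more temporally consistent motion.
    Illustrative only; real benchmarks usually compute warping error
    with optical flow to separate true motion from flicker.
    """
    frames = np.asarray(frames, dtype=np.float64)
    diffs = np.abs(np.diff(frames, axis=0))  # shape (T-1, H, W, C)
    return float(diffs.mean())

# A perfectly static "video" has zero frame-to-frame change,
# while random frames score much higher.
static = np.zeros((4, 8, 8, 3))
print(temporal_consistency(static))  # 0.0
```

A metric like this can only flag flicker, not judge whether motion is plausible, which is why controllable-motion methods are usually evaluated with user studies or learned video-quality metrics as well.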