Video Frame Interpolation
Video frame interpolation (VFI) aims to generate realistic intermediate frames between existing ones in a video sequence, increasing the frame rate and improving visual smoothness. Current research focuses heavily on improving accuracy and efficiency, exploring model architectures including convolutional neural networks, transformers, and diffusion models, often combined with techniques such as optical flow estimation, motion modeling, and multi-modal data fusion (e.g., combining RGB and event camera data). These advances have significant implications for applications such as slow-motion video generation, frame-rate upconversion, and quality enhancement of existing video content, driving improvements in both entertainment and scientific visualization.
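To make the flow-based idea concrete, the sketch below (not drawn from any of the listed papers) warps both input frames toward an intermediate time t under a linear-motion assumption and blends the results. It assumes bidirectional optical flow is already available from an off-the-shelf estimator; the function names and the toy zero-flow inputs are illustrative only, and real VFI models typically refine the warped frames with a learned synthesis network.

```python
import torch
import torch.nn.functional as F

def backward_warp(frame, flow):
    """Sample `frame` (B, C, H, W) at positions displaced by `flow` (B, 2, H, W)."""
    _, _, h, w = frame.shape
    ys, xs = torch.meshgrid(
        torch.arange(h, dtype=frame.dtype),
        torch.arange(w, dtype=frame.dtype),
        indexing="ij",
    )
    grid = torch.stack((xs, ys), dim=0).unsqueeze(0) + flow   # absolute pixel coordinates
    gx = 2.0 * grid[:, 0] / (w - 1) - 1.0                     # normalize x to [-1, 1]
    gy = 2.0 * grid[:, 1] / (h - 1) - 1.0                     # normalize y to [-1, 1]
    return F.grid_sample(frame, torch.stack((gx, gy), dim=-1),
                         mode="bilinear", align_corners=True)

def interpolate_frame(frame0, frame1, flow_0to1, flow_1to0, t=0.5):
    """Synthesize the frame at time t in (0, 1) between frame0 and frame1."""
    flow_t0 = t * flow_1to0          # linear-motion approximation of flow from time t to 0
    flow_t1 = (1.0 - t) * flow_0to1  # and from time t to 1
    warped0 = backward_warp(frame0, flow_t0)
    warped1 = backward_warp(frame1, flow_t1)
    return (1.0 - t) * warped0 + t * warped1

# Toy usage: with zero flow the mid-frame reduces to a simple blend of the two inputs.
f0, f1 = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
zero_flow = torch.zeros(1, 2, 64, 64)
mid = interpolate_frame(f0, f1, zero_flow, zero_flow)
print(mid.shape)  # torch.Size([1, 3, 64, 64])
```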
Papers
Enhancing Deformable Convolution based Video Frame Interpolation with Coarse-to-fine 3D CNN
Duolikun Danier, Fan Zhang, David Bull
A Subjective Quality Study for Video Frame Interpolation
Duolikun Danier, Fan Zhang, David Bull
Exploring Discontinuity for Video Frame Interpolation
Sangjin Lee, Hyeongmin Lee, Chajin Shin, Hanbin Son, Sangyoun Lee