Paper ID: 2206.08572

Enhanced Bi-directional Motion Estimation for Video Frame Interpolation

Xin Jin, Longhai Wu, Guotao Shen, Youxin Chen, Jie Chen, Jayoon Koo, Cheul-hee Hahm

We present a novel, simple yet effective algorithm for motion-based video frame interpolation. Existing motion-based interpolation methods typically rely on a pre-trained optical flow model or a U-Net based pyramid network for motion estimation, which suffer from either large model size or limited capacity in handling complex and large motion cases. In this work, by carefully integrating intermediate-oriented forward-warping, a lightweight feature encoder, and a correlation volume into a pyramid recurrent framework, we derive a compact model that simultaneously estimates the bi-directional motion between input frames. It is 15 times smaller in size than PWC-Net, yet enables more reliable and flexible handling of challenging motion cases. Based on the estimated bi-directional motion, we forward-warp input frames and their context features to the intermediate frame, and employ a synthesis network to estimate the intermediate frame from the warped representations. Our method achieves excellent performance on a broad range of video frame interpolation benchmarks. Code and trained models are available at \url{https://github.com/srcn-ivl/EBME}.
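For illustration, below is a minimal sketch (not the authors' implementation) of one component named in the abstract: a local correlation volume computed between feature maps of the two input frames. The function name `local_correlation_volume`, the (B, C, H, W) PyTorch tensor layout, and the search radius are assumptions made for this example.

```python
import torch
import torch.nn.functional as F

def local_correlation_volume(feat0, feat1, r=4):
    """Correlate feat0 with feat1 over a (2r+1) x (2r+1) search window.

    feat0, feat1: (B, C, H, W) feature maps extracted from the two input frames.
    Returns: (B, (2r+1)**2, H, W) correlation volume.
    """
    b, c, h, w = feat0.shape
    # Zero-pad feat1 so every displacement in the search window is defined.
    feat1_pad = F.pad(feat1, (r, r, r, r))
    corr = []
    for dy in range(2 * r + 1):
        for dx in range(2 * r + 1):
            # Shift feat1 by the candidate displacement (dx - r, dy - r).
            shifted = feat1_pad[:, :, dy:dy + h, dx:dx + w]
            # Per-pixel dot product, normalized by the channel count.
            corr.append((feat0 * shifted).sum(dim=1, keepdim=True) / c)
    return torch.cat(corr, dim=1)

# Example: features from a lightweight encoder at reduced resolution (shapes assumed).
f0 = torch.randn(1, 64, 56, 56)
f1 = torch.randn(1, 64, 56, 56)
vol = local_correlation_volume(f0, f1, r=4)
print(vol.shape)  # torch.Size([1, 81, 56, 56])
```

In a pyramid recurrent setup of the kind the abstract describes, such a volume would typically be computed at each pyramid level and fed, together with warped features, to a motion estimator that refines the bi-directional flow; the exact wiring here is an assumption.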

Submitted: Jun 17, 2022