Frame Interpolation for Dynamic Scenes with Implicit Flow Encoding

Texas A&M University

Accepted to WACV 2023

Teaser image of the paper

FrameintIFE interpolates challenging near-duplicate photos, creating a slow-motion video that depicts the natural transition between them.

Comparisons

ABME

FILM

FrameintIFE

Abstract

In this paper, we propose an algorithm to interpolate between a pair of images of a dynamic scene. While significant progress in frame interpolation has been made in recent years, current approaches are unable to handle images with brightness and illumination changes, which are common even when the images are captured moments apart. We propose to address this problem by taking advantage of existing optical flow methods, which are highly robust to variations in illumination. Specifically, using the bidirectional flows estimated with an existing pre-trained flow network, we predict the flows from an intermediate frame to the two input images. To do this, we encode the bidirectional flows into a coordinate-based network, powered by a hypernetwork, to obtain a continuous representation of the flow across time. Once we obtain the estimated flows, we use them within an existing blending network to produce the final intermediate frame. Through extensive experiments, we demonstrate that our approach produces significantly better results than state-of-the-art frame interpolation algorithms.
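
The sketch below (PyTorch, not the authors' released code) illustrates the core idea in the abstract: a hypernetwork, conditioned on time t, predicts the weights of a small coordinate-based MLP that maps pixel coordinates to a flow vector. Fitting this representation so that it reproduces the two estimated bidirectional flows at t = 0 and t = 1 yields a flow that is continuous across time and can be queried at any intermediate t before being handed to a blending network. All layer sizes, the toy flows, and the fitting loop are illustrative assumptions.

    import torch
    import torch.nn as nn

    class HyperNet(nn.Module):
        """Maps a scalar time t to the weights and biases of a 2-layer coordinate MLP."""
        def __init__(self, hidden=64, coord_hidden=32):
            super().__init__()
            self.coord_hidden = coord_hidden
            # Parameter count of the coordinate MLP: 2 -> coord_hidden -> 2.
            self.n_params = (2 * coord_hidden + coord_hidden) + (coord_hidden * 2 + 2)
            self.net = nn.Sequential(
                nn.Linear(1, hidden), nn.ReLU(),
                nn.Linear(hidden, self.n_params),
            )

        def forward(self, t):
            return self.net(t)  # shape: (batch, n_params)

    def coord_mlp(coords, params, coord_hidden=32):
        """Evaluate the coordinate MLP whose parameters come from the hypernetwork.

        coords: (N, 2) normalized pixel coordinates; params: flat (n_params,) vector.
        Returns (N, 2) flow vectors.
        """
        i = 0
        w1 = params[i:i + 2 * coord_hidden].view(coord_hidden, 2); i += 2 * coord_hidden
        b1 = params[i:i + coord_hidden]; i += coord_hidden
        w2 = params[i:i + coord_hidden * 2].view(2, coord_hidden); i += coord_hidden * 2
        b2 = params[i:i + 2]
        h = torch.relu(coords @ w1.t() + b1)
        return h @ w2.t() + b2

    # Toy fitting loop: flow_01 / flow_10 stand in for the bidirectional flows
    # estimated by a pre-trained optical flow network.
    H, W = 32, 32
    ys, xs = torch.meshgrid(torch.linspace(-1, 1, H), torch.linspace(-1, 1, W), indexing="ij")
    coords = torch.stack([xs, ys], dim=-1).reshape(-1, 2)
    flow_01 = torch.randn(H * W, 2) * 0.1   # placeholder for the flow 0 -> 1
    flow_10 = -flow_01                      # placeholder for the flow 1 -> 0

    hyper = HyperNet()
    opt = torch.optim.Adam(hyper.parameters(), lr=1e-3)
    for step in range(200):
        loss = 0.0
        # Constrain the implicit flow to reproduce the input flows at t = 0 and t = 1.
        for t, target in [(0.0, flow_01), (1.0, flow_10)]:
            params = hyper(torch.tensor([[t]]))[0]
            loss = loss + ((coord_mlp(coords, params) - target) ** 2).mean()
        opt.zero_grad()
        loss.backward()
        opt.step()

    # Query the continuous representation at an intermediate time (e.g. t = 0.5) to get
    # flows that a blending network could use to warp and merge the two input images.
    with torch.no_grad():
        flow_t = coord_mlp(coords, hyper(torch.tensor([[0.5]]))[0]).reshape(H, W, 2)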

Hypernetwork Ablation

Synthetic Example

Synthetic example of hypernet ablation image

Real Example

Real example of hypernet ablation image

Architecture Overview

Architecture image

Additional Examples

Supplementary Video

BibTeX

@misc{figueiredo2022frameintife,
  doi       = {10.48550/ARXIV.2209.13284},
  url       = {https://arxiv.org/abs/2209.13284},
  author    = {Figueirêdo, Pedro and Paliwal, Avinash and Kalantari, Nima Khademi},
  title     = {Frame Interpolation for Dynamic Scenes with Implicit Flow Encoding},
  publisher = {arXiv},
  year      = {2022}
}