MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers

1University of California, Irvine 2GenAI, Meta 3National University of Singapore
CVPR 2024

Abstract

Recent advances in generative AI have significantly enhanced image and video editing, particularly under text prompt control. State-of-the-art approaches predominantly rely on diffusion models for these tasks. However, diffusion-based methods are computationally demanding and often require large-scale paired datasets for training, which hinders their deployment in practical applications. This study addresses this challenge by breaking the text-based video editing process into two separate stages.

In the first stage, we leverage an existing text-to-image diffusion model to jointly edit a few keyframes without additional fine-tuning. In the second stage, we introduce MaskINT, an efficient model built on non-autoregressive masked generative transformers that specializes in interpolating frames between the edited keyframes, guided by the structure of the intermediate frames. Comprehensive experiments demonstrate the efficacy and efficiency of MaskINT compared to diffusion-based methods. This research offers a practical solution for text-based video editing and showcases the potential of non-autoregressive masked generative transformers in this domain.

Method

We propose to disentangle text-based video editing into a two-stage pipeline: joint keyframe editing with an existing image diffusion model, followed by structure-aware frame interpolation with masked generative transformers trained on video-only datasets.
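As a concrete illustration, the sketch below runs Stage 1 with an off-the-shelf ControlNet image pipeline from the diffusers library and then hands the result to Stage 2. It is a minimal sketch under stated assumptions, not our released implementation: the keyframes are edited independently here (the paper edits them jointly with shared attention), HED edges are one plausible structure signal, and `maskint_interpolate` is a hypothetical helper standing in for the MaskINT model.

```python
# Two-stage pipeline sketch. Model choices, the HED structure signal, and
# the `maskint_interpolate` helper are illustrative assumptions, not the
# paper's released code.
import torch
from diffusers import ControlNetModel, StableDiffusionControlNetPipeline
from controlnet_aux import HEDdetector  # assumed edge extractor

controlnet = ControlNetModel.from_pretrained(
    "lllyasviel/sd-controlnet-hed", torch_dtype=torch.float16
)
pipe = StableDiffusionControlNetPipeline.from_pretrained(
    "runwayml/stable-diffusion-v1-5",
    controlnet=controlnet,
    torch_dtype=torch.float16,
).to("cuda")
hed = HEDdetector.from_pretrained("lllyasviel/Annotator")

def edit_video(frames, prompt, key_stride=8):
    # Pick sparse keyframes and extract a structure map for every frame.
    key_ids = list(range(0, len(frames), key_stride))
    edges = [hed(f) for f in frames]
    # Stage 1: edit the keyframes with the text prompt. The paper edits
    # them *jointly* with shared attention; independent edits here are a
    # simplification and give weaker cross-frame appearance consistency.
    edited_keys = pipe(
        prompt=[prompt] * len(key_ids),
        image=[edges[i] for i in key_ids],
    ).images
    # Stage 2: MaskINT fills in the frames between consecutive keyframes,
    # guided by the per-frame structure maps (hypothetical helper).
    return maskint_interpolate(edited_keys, key_ids, edges)
```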

We propose MaskINT to perform structure-aware frame interpolation; to our knowledge, it is the first work to explicitly introduce structure control into non-autoregressive masked generative transformers.
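At inference, non-autoregressive masked transformers follow the MaskGIT recipe: start from an all-[MASK] token grid for the intermediate frames, predict every token in parallel, keep the most confident predictions, and re-mask the rest on a shrinking schedule. The sketch below is a minimal, generic version of that decoding loop; the mask token id, the conditioning interface, and the cosine schedule are assumptions rather than the paper's exact configuration.

```python
import math
import torch

MASK_ID = 8192  # assumed: one id past a VQ codebook of size 8192

@torch.no_grad()
def iterative_decode(transformer, keyframe_tokens, structure_tokens,
                     seq_len, steps=16, device="cuda"):
    """MaskGIT-style confidence-based parallel decoding (sketch).

    `transformer` is assumed to map (tokens, keyframe_tokens,
    structure_tokens) -> logits of shape (seq_len, vocab_size); this
    conditioning interface is illustrative, not the paper's exact one.
    """
    tokens = torch.full((seq_len,), MASK_ID, dtype=torch.long, device=device)
    for t in range(steps):
        logits = transformer(tokens, keyframe_tokens, structure_tokens)
        probs = logits.softmax(dim=-1)                     # (seq_len, vocab)
        sampled = torch.multinomial(probs, 1).squeeze(-1)  # one draw per token
        conf = probs.gather(-1, sampled.unsqueeze(-1)).squeeze(-1)
        # Positions decoded in earlier steps keep their token and are
        # never re-masked (infinite confidence).
        known = tokens != MASK_ID
        sampled = torch.where(known, tokens, sampled)
        conf = torch.where(known, torch.full_like(conf, float("inf")), conf)
        # Cosine schedule: fraction of tokens still masked after this step.
        n_masked = int(seq_len * math.cos(math.pi / 2 * (t + 1) / steps))
        if n_masked == 0:
            return sampled
        # Re-mask the n_masked least-confident positions for the next pass.
        tokens = sampled.clone()
        tokens[conf.topk(n_masked, largest=False).indices] = MASK_ID
    return tokens
```

Because this loop runs a small, fixed number of parallel passes (e.g., 16) rather than the many sequential denoising steps a diffusion sampler requires, it is the source of the inference speed advantage noted below.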

Experimental results demonstrate that our method achieves performance comparable to diffusion-based methods in terms of temporal consistency and alignment with text prompts, while running 5-7× faster at inference.

MaskINT Editing Results

"a wolf" "cute plastic pig money bank" "a panda" "a male lion on ice in snowy day"
"a man hikes in autumn" "two man play basketball under galaxy" "backview of three sculptures" "a blue airplane flies away in the dark night"
"black poodle dog runs" "a dog looking through car window, van gogh style" "frozen fish in water" "a boat sails on green grass"
Long Video Editing
"a car drives on ice road in snowy day" "two men play kite surf, van gogh style" "a man with black clothes on snowboard on sand in desert"

Comparisons

"a rhino walks on ice in snowy day" "a car drives on asphalt road in mountain van gogh style" "a man with black clothes on snowboard on sand in desert" "a man performs freestyle dance outdoors, van gogh style"
Tune-A-Video
Text2Video-Zero
TokenFlow
ControlVideo
MaskINT (ours)

BibTeX

@inproceedings{ma2023maskint,
  author    = {Ma, Haoyu and Mahdizadehaghdam, Shahin and Wu, Bichen and Fan, Zhipeng and Gu, Yuchao and Zhao, Wenliang and Shapira, Lior and Xie, Xiaohui},
  title     = {MaskINT: Video Editing via Interpolative Non-autoregressive Masked Transformers},
  booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR)},
  year      = {2024},
}