Published on Thu Apr 15 2021

Zooming SlowMo: An Efficient One-Stage Framework for Space-Time Video Super-Resolution

Xiaoyu Xiang, Yapeng Tian, Yulun Zhang, Yun Fu, Jan P. Allebach, Chenliang Xu
Abstract

In this paper, we address space-time video super-resolution, which aims to generate a high-resolution (HR) slow-motion video from a low-resolution (LR), low frame rate (LFR) video sequence. A naïve approach decomposes the problem into two sub-tasks: video frame interpolation (VFI) and video super-resolution (VSR). However, temporal interpolation and spatial upscaling are closely inter-related in this problem, and two-stage approaches cannot fully exploit this natural property. Moreover, state-of-the-art VFI and VSR deep networks usually rely on a large frame-reconstruction module to produce high-quality, photo-realistic frames, so two-stage approaches end up with large models and are relatively time-consuming. To overcome these issues, we present a one-stage space-time video super-resolution framework that directly reconstructs an HR slow-motion video sequence from an input LR, LFR video. Instead of reconstructing missing LR intermediate frames as VFI models do, we temporally interpolate the features of the missing LR frames with a feature temporal interpolation module that captures local temporal contexts. Extensive experiments on widely used benchmarks demonstrate that the proposed framework not only achieves better qualitative and quantitative performance on both clean and noisy LR frames, but is also several times faster than recent state-of-the-art two-stage networks. The source code is released at https://github.com/Mukosame/Zooming-Slow-Mo-CVPR-2020 .
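At a high level, the one-stage pipeline interpolates in feature space rather than pixel space: extract features from the given LR frames, synthesize the features of the missing intermediate frames, then reconstruct HR frames from the full feature sequence. The sketch below illustrates only where that interpolation sits in the pipeline; the feature extractor and the linear blend are toy stand-ins (the actual module uses learned, deformable sampling), and all names and shapes are illustrative, not from the paper's code.

```python
import numpy as np

# Hypothetical sizes: C feature channels over an H x W low-resolution grid.
C, H, W = 64, 32, 32
rng = np.random.default_rng(0)

def extract_features(lr_frame):
    # Stand-in for a learned feature extractor: lift one LR frame
    # (H x W) into a C-channel feature map by channel replication.
    return np.repeat(lr_frame[None, :, :], C, axis=0)

def interpolate_features(f_prev, f_next, alpha=0.5):
    # Stand-in for the paper's feature temporal interpolation module.
    # A plain linear blend between the two adjacent frame features;
    # the real module instead learns deformable sampling offsets.
    return (1 - alpha) * f_prev + alpha * f_next

# Two consecutive LR input frames (random data for illustration).
lr_t0 = rng.random((H, W))
lr_t1 = rng.random((H, W))

f0 = extract_features(lr_t0)
f1 = extract_features(lr_t1)

# Feature map of the missing in-between frame, never reconstructed as pixels.
f_mid = interpolate_features(f0, f1)
print(f_mid.shape)  # (64, 32, 32)
```

In the actual framework the interpolated features, together with the features of the input frames, are fed through a shared reconstruction module that upscales all of them to HR at once, which is what lets the single model replace a VFI network plus a VSR network.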

Wed Feb 26 2020
Computer Vision
Zooming Slow-Mo: Fast and Accurate One-Stage Space-Time Video Super-Resolution
The space-time video super-resolution task aims to generate a high-resolution slow-motion video from a low-resolution, low frame rate input. Two-stage methods cannot fully exploit the natural coupling between video frame interpolation and video super-resolution. The proposed method is more than three times faster than recent two-stage state-of-the-art methods.
Mon Dec 16 2019
Computer Vision
FISR: Deep Joint Frame Interpolation and Super-Resolution with A Multi-scale Temporal Loss
Super-resolution has been widely used to convert low-resolution legacy videos to high-resolution (HR) ones. However, it becomes easier for humans to notice motion artifacts (e.g. motion judder) in HR videos. To up-convert legacy videos for realistic applications, …
Mon Apr 12 2021
Computer Vision
Efficient Space-time Video Super Resolution using Low-Resolution Flow and Mask Upsampling
This paper explores an efficient solution for space-time super-resolution. It aims to generate high-resolution slow-motion videos from low-resolution, low frame rate videos. The model is lightweight and performs better than current state-of-the-art models.
Mon Apr 06 2020
Computer Vision
Deep Space-Time Video Upsampling Networks
Video super-resolution (VSR) and frame interpolation (FI) are traditional computer vision problems. Performance has recently been improving by incorporating deep learning.
Mon Jan 06 2020
Computer Vision
Deep Video Super-Resolution using HR Optical Flow Estimation
Video super-resolution aims at generating a sequence of high-resolution (HR) frames with plausible and temporally consistent details from their low-resolution (LR) counterparts. The key challenge for video SR lies in the effective exploitation of temporal dependency between consecutive frames.
Fri Apr 05 2019
Computer Vision
Fast Spatio-Temporal Residual Network for Video Super-Resolution
Deep learning based video super-resolution (SR) methods have achieved promising performance. We present a novel fast spatio-temporal residual network (FSTRN) that adopts 3D convolutions for the video SR task in order to enhance performance.