Efficient Neural Video Representation with Temporally Coherent Modulation

Seungjun Shin*, Suji Kim*, Dokwan Oh
Samsung Advanced Institute of Technology
ECCV 2024 (Oral)

*equal contribution

NVTM: Neural Video representation with Temporally coherent Modulation

NVTM is a compute-efficient (fast encoding) and parameter-efficient (high reconstruction quality for a given parameter budget) INR framework for videos that takes their dynamic characteristics into account. The key idea is to represent a video with a set of 2D latent grids, applying the same modulation to temporally corresponding pixels.
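To make the idea concrete, below is a minimal sketch of a latent-modulated coordinate network, assuming a SIREN-style base MLP with shift modulation. All names and the exact modulation form are illustrative assumptions, not the paper's official implementation.

```python
# Minimal sketch of latent-modulated INR decoding (hypothetical names; the
# paper's exact modulation scheme may differ). A shared base MLP maps
# coordinates to RGB, with per-pixel latents shifting hidden activations.
import torch
import torch.nn as nn

class ModulatedMLP(nn.Module):
    def __init__(self, latent_dim=64, hidden=256, layers=4):
        super().__init__()
        self.inp = nn.Linear(3, hidden)  # (x, y, t) -> hidden
        self.mods = nn.ModuleList(
            [nn.Linear(latent_dim, hidden) for _ in range(layers)])
        self.hiddens = nn.ModuleList(
            [nn.Linear(hidden, hidden) for _ in range(layers)])
        self.out = nn.Linear(hidden, 3)  # hidden -> RGB

    def forward(self, coords, latent):
        # coords: (N, 3); latent: (N, latent_dim). Temporally corresponding
        # pixels are given the same latent, so they receive identical
        # modulation from the shared base network.
        h = torch.sin(self.inp(coords))
        for lin, mod in zip(self.hiddens, self.mods):
            h = torch.sin(lin(h) + mod(latent))  # shift-modulated activation
        return self.out(h)
```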

Abstract

Implicit neural representations (INRs) have found successful applications across diverse domains. To employ INRs in real-world settings, it is important to speed up training. In the field of INRs for video, the state-of-the-art approach employs a grid-type parametric encoding and achieves substantially faster encoding than its predecessors. However, this grid usage does not account for the dynamic nature of video and leads to redundant use of trainable parameters; as a result, it has significantly lower parameter efficiency and a higher bitrate than NeRV-style methods that use no parametric encoding. To address this problem, we propose Neural Video representation with Temporally coherent Modulation (NVTM), a novel framework that captures the dynamic characteristics of video. By decomposing spatio-temporal 3D video data into a set of 2D grids with flow information, NVTM learns video representations rapidly and uses parameters efficiently. Our framework processes temporally corresponding pixels at once, achieving the fastest encoding speed at a reasonable video quality, over 3x faster than the NeRV-style method in particular. It also achieves average improvements of 1.54dB/0.019 in PSNR/LPIPS on UVG (Dynamic) (even with 10% fewer parameters) and 1.84dB/0.013 in PSNR/LPIPS on MCL-JCV (Dynamic) over previous grid-type works. Extending the framework to compression, we demonstrate performance comparable to video compression standards (H.264, HEVC) and to recent INR approaches for video compression. Additionally, extensive experiments demonstrate the superior performance of our algorithm across diverse tasks, including super-resolution, frame interpolation, and video inpainting.

Methodology

Figure: NVTM architecture overview.

NVTM generates the same modulation latent for temporally corresponding pixels across consecutive frames, and this latent is used to modulate the base network. The latent is obtained as follows (see the sketch after this list):
1) The input video is split into GOP (group of pictures) units.
2) A flow network F maps each 3D coordinate (x, y, t) in the k-th GOP unit to an alignment flow toward a reference time within that GOP.
3) The 2D aligned coordinate is obtained by adding the alignment flow to (x, y).
4) The temporally coherent latent is extracted from the latent grid Gk at the normalized aligned coordinate.
Through this process, temporally corresponding 3D coordinates map to the same 2D coordinate, ensuring they share the same modulation latent. This shared modulation enables fast and parameter-efficient learning of the video representation.
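A minimal sketch of this latent lookup, assuming coordinates normalized to [-1, 1] and one 2D latent grid per GOP; flow_net, grids, and the normalization convention are illustrative assumptions rather than the official code:

```python
# Sketch of the temporally coherent latent lookup. Steps 1-4 mirror the list
# above; function and variable names are hypothetical.
import torch
import torch.nn.functional as F

def coherent_latent(coords_xyt, flow_net, grids):
    # coords_xyt: (N, 3) with x, y, t in [-1, 1]
    # grids: (K, C, H, W) tensor, one 2D latent grid per GOP
    # flow_net: maps (x, y, t) -> (N, 2) alignment flow toward the GOP's
    #           reference time
    K, C = grids.shape[0], grids.shape[1]
    # Step 1: assign each coordinate to its GOP index k from the time axis.
    k = ((coords_xyt[:, 2] + 1) / 2 * K).long().clamp(max=K - 1)
    # Steps 2-3: predict the alignment flow and form the 2D aligned coordinate.
    aligned_xy = (coords_xyt[:, :2] + flow_net(coords_xyt)).clamp(-1, 1)
    # Step 4: bilinearly sample each GOP's grid at the aligned coordinate.
    out = coords_xyt.new_zeros(len(coords_xyt), C)
    for i in range(K):
        sel = k == i
        if sel.any():
            xy = aligned_xy[sel].view(1, 1, -1, 2)       # grid_sample layout
            z = F.grid_sample(grids[i:i + 1], xy, align_corners=True)
            out[sel] = z.view(C, -1).t()                  # (M, C) latents
    return out  # temporally corresponding pixels get identical latent rows
```

Because the flow maps temporally corresponding 3D coordinates to the same aligned 2D location, they sample the same grid cell and therefore share one modulation latent, which is what makes the shared-modulation training fast and parameter-efficient.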

Fast Encoding Speed

Reconstruction after 1 minute of training
(Bosphorus sequence from UVG)

Video Frame Interpolation

Video Inpainting