DeepMind Trains Artificial Intelligence to Optimise Video Compression

DeepMind’s MuZero algorithm has demonstrated its ability to teach itself to play Go, chess, shogi, and 57 classic Atari video games without ever being told their rules, instead learning them through trial and self-play. It now matches the performance of AlphaZero on chess and shogi.

A new paper, “MuZero with Self-competition for Rate Control in VP9 Video Compression”, describes using MuZero to optimise the parameters of a video compression algorithm, once again learning purely through experimentation and self-play. It finds that MuZero achieves an average 6.28% bitrate reduction at the same quality compared with the algorithm’s human hand-tuned rate control. Here is the abstract:

Video streaming usage has seen a significant rise as entertainment, education, and business increasingly rely on online video. Optimizing video compression has the potential to increase access and quality of content to users, and reduce energy use and costs overall. In this paper, we present an application of the MuZero algorithm to the challenge of video compression. Specifically, we target the problem of learning a rate control policy to select the quantization parameters (QP) in the encoding process of libvpx, an open source VP9 video compression library widely used by popular video-on-demand (VOD) services. We treat this as a sequential decision making problem to maximize the video quality with an episodic constraint imposed by the target bitrate. Notably, we introduce a novel self-competition based reward mechanism to solve constrained RL with variable constraint satisfaction difficulty, which is challenging for existing constrained RL methods. We demonstrate that the MuZero-based rate control achieves an average 6.28% reduction in size of the compressed videos for the same delivered video quality level (measured as PSNR BD-rate) compared to libvpx’s two-pass VBR rate control policy, while having better constraint satisfaction behavior.
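The abstract’s “self-competition based reward” can be pictured as the agent comparing each finished encoding episode against its own historical performance rather than against a fixed reward scale. The sketch below is a hypothetical illustration of that idea, assuming a simple scheme in which the agent competes on constraint satisfaction (bitrate overshoot) until the target bitrate is reliably met, and on quality thereafter; all names and thresholds are illustrative, not taken from the paper.

```python
# Hypothetical sketch of a self-competition episodic reward for
# constrained rate control. The function names, arguments, and the
# two-stage overshoot/quality rule are assumptions for illustration,
# not the paper's actual formulation.

def self_competition_reward(bitrate, quality, target_bitrate,
                            hist_overshoot, hist_quality):
    """Return +1.0 if this episode beats the agent's own historical
    performance, else -1.0.

    bitrate, quality -- outcome of the current encoding episode
    target_bitrate   -- the episodic constraint
    hist_overshoot   -- historical (e.g. moving-average) overshoot
    hist_quality     -- historical quality on comparable episodes
    """
    overshoot = max(0.0, bitrate - target_bitrate)
    if overshoot > 0 or hist_overshoot > 0:
        # While the constraint is hard to satisfy, compete on overshoot:
        # any improvement over the agent's own history is rewarded.
        return 1.0 if overshoot < hist_overshoot else -1.0
    # Once the constraint is reliably met, compete on delivered quality.
    return 1.0 if quality > hist_quality else -1.0


# Example: constraint met, quality improved over history -> +1.0
r = self_competition_reward(bitrate=900, quality=40.0,
                            target_bitrate=1000,
                            hist_overshoot=0.0, hist_quality=39.0)
```

Because the baseline moves with the agent’s own past results, the reward stays informative whether the bitrate constraint is easy or hard to satisfy for a given video, which is the difficulty the abstract attributes to existing constrained-RL methods.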