Video Quality Enhancement

Video quality enhancement (VQE) aims to improve the visual fidelity of compressed or degraded videos, primarily by mitigating artifacts introduced during compression or transmission. Current research focuses heavily on deep learning models, employing architectures such as transformers and convolutional neural networks (including deformable convolutions) to exploit both spatial and temporal information across video frames, often incorporating coding priors or bitstream metadata for improved efficiency and accuracy. These advances matter for applications such as online video streaming, video conferencing, and playback on resource-constrained devices, because they enable higher-quality video at lower bitrates or computational cost.
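The core idea of exploiting temporal information can be illustrated with a deliberately simple sketch: compression noise is roughly independent across frames, so averaging co-located pixels from neighboring frames suppresses it. The function and synthetic data below are illustrative only; the papers surveyed here replace this plain mean with learned, motion-compensated fusion (e.g. via deformable convolutions or transformer attention).

```python
import numpy as np

def temporal_fuse(frames, center, radius=1):
    """Toy temporal fusion: average the center frame with its neighbors.

    Real VQE models align neighboring frames (motion compensation) and
    learn the fusion weights; this unweighted mean only shows why
    temporal redundancy helps at all.
    """
    lo = max(0, center - radius)
    hi = min(len(frames), center + radius + 1)
    window = np.stack(frames[lo:hi]).astype(np.float64)
    return window.mean(axis=0)

# Synthetic example: a static 16x16 scene corrupted by independent
# per-frame noise (a stand-in for compression artifacts).
rng = np.random.default_rng(0)
clean = rng.uniform(0, 255, size=(16, 16))
noisy = [clean + rng.normal(0, 10, clean.shape) for _ in range(5)]

fused = temporal_fuse(noisy, center=2, radius=2)
mse_before = float(((noisy[2] - clean) ** 2).mean())
mse_after = float(((fused - clean) ** 2).mean())
```

Averaging five frames cuts the noise variance by roughly a factor of five for a static scene; with motion, naive averaging blurs, which is exactly why learned alignment (deformable convolutions, flow, or attention) is central to current methods.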

Papers