Video Quality Enhancement
Video quality enhancement (VQE) aims to improve the visual fidelity of compressed or degraded videos, primarily by mitigating artifacts introduced during compression or transmission. Current research focuses heavily on deep learning models, employing architectures such as transformers and convolutional neural networks (including deformable convolutions) to exploit both spatial and temporal information across video frames, often incorporating coding priors or bitstream metadata to improve efficiency and accuracy. These advances matter for applications such as online video streaming, video conferencing, and playback on resource-constrained devices, because they enable higher-quality video at lower bitrates or computational cost.
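To make the spatial-temporal idea concrete, the sketch below (a minimal, illustrative PyTorch model, not the architecture of any specific paper) enhances the centre frame of a short decoded window: neighbouring frames are fused along the channel axis, passed through a small residual CNN, and the network predicts an artifact correction added back to the centre frame. The class and parameter names (`MultiFrameVQE`, `window`) are assumptions introduced here for illustration.

```python
# Minimal sketch of multi-frame video quality enhancement (assumed setup, PyTorch).
import torch
import torch.nn as nn


class MultiFrameVQE(nn.Module):
    def __init__(self, window: int = 5, channels: int = 64):
        super().__init__()
        self.window = window
        # Early fusion: stack the T decoded RGB frames along the channel axis.
        self.fuse = nn.Conv2d(3 * window, channels, kernel_size=3, padding=1)
        # A few conv+ReLU stages to mix spatial-temporal features.
        self.body = nn.Sequential(
            *[nn.Sequential(
                nn.Conv2d(channels, channels, 3, padding=1),
                nn.ReLU(inplace=True),
            ) for _ in range(4)]
        )
        # Predict a residual (artifact correction) for the centre frame.
        self.out = nn.Conv2d(channels, 3, kernel_size=3, padding=1)

    def forward(self, frames: torch.Tensor) -> torch.Tensor:
        # frames: (B, T, 3, H, W) with T == self.window; the centre frame is enhanced.
        b, t, c, h, w = frames.shape
        center = frames[:, t // 2]
        x = self.fuse(frames.reshape(b, t * c, h, w))
        x = self.body(x)
        return center + self.out(x)  # enhanced centre frame


if __name__ == "__main__":
    model = MultiFrameVQE(window=5)
    clip = torch.rand(1, 5, 3, 64, 64)   # a 5-frame decoded window
    enhanced = model(clip)
    print(enhanced.shape)                # torch.Size([1, 3, 64, 64])
```

Published methods typically replace the naive channel-stacking here with explicit alignment (e.g. deformable convolutions or attention over neighbouring frames) and may condition the network on coding information such as QP or partition maps; this sketch only shows the residual, multi-frame formulation they share.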
Papers
Fifteen papers on video quality enhancement, dated from January 22, 2022 to June 14, 2024.