Video Compression
Video compression aims to reduce the size of video files without significant loss of visual quality, which is crucial for efficient storage and transmission. Current research focuses heavily on neural network-based approaches, particularly implicit neural representations (INRs) and autoencoder architectures, which learn to represent video data more compactly than traditional codecs such as H.264 and H.265, often leveraging techniques like motion estimation and residual coding. These advances improve rate-distortion performance, decoding speed, and adaptability to diverse content types (e.g., screen content, blurry videos), with applications ranging from live streaming to autonomous driving and the metaverse. Ongoing work addresses challenges such as computational complexity, fair performance evaluation across codecs, and improving perceptual quality alongside objective metrics. A minimal sketch of the INR idea appears below.
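To illustrate the INR approach mentioned above, the sketch below shows a small coordinate-based network that maps a normalized frame index and pixel position to an RGB value, so the "compressed" video is simply the overfit network weights (followed in practice by weight quantization and entropy coding, omitted here). The model, layer sizes, and training loop are hypothetical simplifications for illustration, not the method of any specific paper listed here.

```python
# Minimal INR-for-video sketch (hypothetical): an MLP maps normalized
# (t, x, y) coordinates to RGB, and the video is stored as the weights.
import torch
import torch.nn as nn


class VideoINR(nn.Module):
    def __init__(self, hidden: int = 256, layers: int = 4):
        super().__init__()
        blocks = [nn.Linear(3, hidden), nn.ReLU()]
        for _ in range(layers - 1):
            blocks += [nn.Linear(hidden, hidden), nn.ReLU()]
        blocks += [nn.Linear(hidden, 3), nn.Sigmoid()]  # RGB in [0, 1]
        self.net = nn.Sequential(*blocks)

    def forward(self, coords: torch.Tensor) -> torch.Tensor:
        # coords: (N, 3) with columns (t, x, y), each normalized to [0, 1]
        return self.net(coords)


def fit(model: nn.Module, coords: torch.Tensor, rgb: torch.Tensor,
        steps: int = 1000) -> nn.Module:
    # Overfit the network to a single video; its weights become the
    # bitstream after quantization and entropy coding (not shown).
    opt = torch.optim.Adam(model.parameters(), lr=1e-3)
    for _ in range(steps):
        opt.zero_grad()
        loss = nn.functional.mse_loss(model(coords), rgb)
        loss.backward()
        opt.step()
    return model
```

Decoding then amounts to evaluating the network at every (t, x, y) coordinate of the target resolution, which is why INR codecs can trade model size against reconstruction quality.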
Papers
Hierarchical B-frame Video Coding Using Two-Layer CANF without Motion Coding
David Alexandre, Hsueh-Ming Hang, Wen-Hsiao Peng
MMVC: Learned Multi-Mode Video Compression with Block-based Prediction Mode Selection and Density-Adaptive Entropy Coding
Bowen Liu, Yu Chen, Rakesh Chowdary Machineni, Shiyu Liu, Hun-Seok Kim