Video Quality Assessment
Video quality assessment (VQA) aims to objectively measure the perceptual quality of a video, a capability crucial for optimizing video compression, generation, and enhancement. Current research focuses heavily on developing robust no-reference and full-reference VQA models, often built on deep learning architectures such as Swin Transformers and Vision Transformers, and on incorporating multimodal information (text, visual, motion) to improve accuracy in challenging scenarios like AI-generated and user-generated content. These advances are vital for improving the user experience in video streaming, strengthening video processing algorithms, and establishing standardized benchmarks for evaluating video quality across diverse platforms and applications.
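VQA models are conventionally evaluated by correlating their predicted quality scores with human mean opinion scores (MOS), typically via Spearman rank-order (SROCC) and Pearson linear (PLCC) correlation. Below is a minimal sketch of that evaluation protocol; the score arrays are synthetic placeholders for illustration, not output from any model discussed here.

```python
# Minimal sketch of the standard VQA evaluation protocol: compare predicted
# quality scores against ground-truth MOS using SROCC (rank agreement),
# PLCC (linear agreement), and RMSE. Scores below are synthetic examples.
import numpy as np
from scipy.stats import spearmanr, pearsonr

# Hypothetical predicted scores for 8 videos and their human MOS (0-100 scale).
predicted = np.array([72.1, 55.3, 81.0, 40.2, 66.7, 90.5, 48.8, 77.4])
mos       = np.array([70.0, 58.0, 85.0, 38.0, 63.0, 92.0, 51.0, 74.0])

srocc, _ = spearmanr(predicted, mos)   # monotonic (rank-order) agreement
plcc, _  = pearsonr(predicted, mos)    # linear agreement
rmse = np.sqrt(np.mean((predicted - mos) ** 2))

print(f"SROCC: {srocc:.3f}  PLCC: {plcc:.3f}  RMSE: {rmse:.2f}")
```

In published evaluations, a monotonic nonlinear mapping (e.g., a four-parameter logistic fit) is often applied to the predictions before computing PLCC and RMSE; it is omitted here for brevity.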
Papers
HDR-ChipQA: No-Reference Quality Assessment on High Dynamic Range Videos
Joshua P. Ebenezer, Zaixi Shang, Yongjun Wu, Hai Wei, Sriram Sethuraman, Alan C. Bovik
Making Video Quality Assessment Models Robust to Bit Depth
Joshua P. Ebenezer, Zaixi Shang, Yongjun Wu, Hai Wei, Sriram Sethuraman, Alan C. Bovik