Video Quality Assessment
Video quality assessment (VQA) aims to objectively measure the perceptual quality of a video, a capability crucial for optimizing video compression, generation, and enhancement. Current research focuses on developing robust no-reference and full-reference VQA models, often built on deep learning architectures such as Swin Transformers and Vision Transformers and incorporating multimodal information (text, visual, motion) to improve accuracy, particularly in challenging scenarios such as AI-generated and user-generated content. These advances are vital for improving the user experience in video streaming, strengthening video processing algorithms, and establishing standardized benchmarks for evaluating video quality across diverse platforms and applications.
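To make the no-reference setting concrete, the sketch below shows one common pattern, not the method of any paper listed here: per-frame features from a pretrained Swin-T backbone (via torchvision) are temporally pooled and regressed to a single quality score. The frame count, input resolution, and regression head are illustrative assumptions.

```python
# A minimal blind (no-reference) VQA sketch: sample frames, extract
# per-frame features with a pretrained Swin-T, average-pool over time,
# and regress a scalar quality score. Illustrative only.
import torch
import torch.nn as nn
from torchvision.models import swin_t, Swin_T_Weights

class SimpleBlindVQA(nn.Module):
    def __init__(self):
        super().__init__()
        self.backbone = swin_t(weights=Swin_T_Weights.DEFAULT)
        self.backbone.head = nn.Identity()   # expose the 768-d pooled features
        self.regressor = nn.Linear(768, 1)   # map pooled features to one score

    def forward(self, frames):
        # frames: (batch, time, 3, 224, 224) -- sampled video frames
        b, t, c, h, w = frames.shape
        feats = self.backbone(frames.reshape(b * t, c, h, w))  # (b*t, 768)
        feats = feats.reshape(b, t, -1).mean(dim=1)            # temporal average pooling
        return self.regressor(feats).squeeze(-1)               # (batch,) predicted scores

video = torch.randn(2, 8, 3, 224, 224)  # 2 clips, 8 sampled frames each
scores = SimpleBlindVQA()(video)
print(scores.shape)                      # torch.Size([2])
```

In practice such models are trained against human mean opinion scores and evaluated with rank and linear correlations (e.g., SRCC and PLCC) between predicted and subjective scores.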
Papers
Enhancing Blind Video Quality Assessment with Rich Quality-aware Features
Wei Sun, Haoning Wu, Zicheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai
RMT-BVQA: Recurrent Memory Transformer-based Blind Video Quality Assessment for Enhanced Video Content
Tianhao Peng, Chen Feng, Duolikun Danier, Fan Zhang, Benoit Vallade, Alex Mackin, David Bull
Cut-FUNQUE: An Objective Quality Model for Compressed Tone-Mapped High Dynamic Range Videos
Abhinau K. Venkataramanan, Cosmin Stejerean, Ioannis Katsavounidis, Hassene Tmar, Alan C. Bovik
PCQA: A Strong Baseline for AIGC Quality Assessment Based on Prompt Condition
Xi Fang, Weigang Wang, Xiaoxin Lv, Jun Yan