Video Quality Assessment
Video quality assessment (VQA) aims to objectively measure the perceptual quality of a video, which is crucial for optimizing video compression, generation, and enhancement. Current research focuses heavily on developing robust no-reference and full-reference VQA models, often employing deep learning architectures such as Swin Transformers and Vision Transformers, and incorporating multimodal information (text, visual, motion) for improved accuracy, particularly in challenging scenarios like AI-generated and user-generated content. These advances are vital for improving the user experience in video streaming, strengthening video processing algorithms, and establishing standardized benchmarks for evaluating video quality across diverse platforms and applications.
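To make the full-reference setting concrete, the sketch below computes the classic PSNR baseline: a per-frame fidelity score against the pristine reference, temporally pooled by averaging. This is a minimal illustration only — the learned models in the papers below replace both the per-frame metric and the pooling with deep, perceptually aligned components. All function names here are our own; only NumPy is assumed.

```python
import numpy as np

def frame_psnr(ref: np.ndarray, dist: np.ndarray, peak: float = 255.0) -> float:
    """Peak signal-to-noise ratio for one reference/distorted frame pair.

    Higher values mean the distorted frame is closer to the reference.
    """
    mse = np.mean((ref.astype(np.float64) - dist.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")  # frames are identical
    return 10.0 * np.log10(peak ** 2 / mse)

def video_psnr(ref_frames, dist_frames) -> float:
    """Full-reference video score: per-frame PSNR averaged over time.

    Mean pooling is the simplest temporal aggregation; learned VQA models
    typically use motion-aware or attention-based pooling instead.
    """
    scores = [frame_psnr(r, d) for r, d in zip(ref_frames, dist_frames)]
    return float(np.mean(scores))
```

A no-reference model faces the harder version of this problem: it must predict a perceptual score from `dist_frames` alone, with no `ref_frames` available, which is why the methods below lean on learned quality-aware features rather than pixel-wise fidelity.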
Papers
AIM 2024 Challenge on Compressed Video Quality Assessment: Methods and Results
Maksim Smirnov, Aleksandr Gushchin, Anastasia Antsiferova, Dmitry Vatolin, Radu Timofte, Ziheng Jia, Zicheng Zhang, Wei Sun, Jiaying Qian, Yuqin Cao, Yinan Sun, Yuxin Zhu, Xiongkuo Min, Guangtao Zhai, Kanjar De, Qing Luo, Ao-Xiang Zhang, Peng Zhang, Haibo Lei, Linyan Jiang, Yaqing Li, Wenhui Meng, Zhenzhong Chen, Zhengxue Cheng, Jiahao Xiao, Jun Xu, Chenlong He, Qi Zheng, Ruoxi Zhu, Min Li, Yibo Fan, Zhengzhong Tu
E-Bench: Subjective-Aligned Benchmark Suite for Text-Driven Video Editing Quality Assessment
Shangkun Sun, Xiaoyu Liang, Songlin Fan, Wenxu Gao, Wei Gao
Enhancing Blind Video Quality Assessment with Rich Quality-aware Features
Wei Sun, Haoning Wu, Zicheng Zhang, Jun Jia, Zhichao Zhang, Linhan Cao, Qiubo Chen, Xiongkuo Min, Weisi Lin, Guangtao Zhai
RMT-BVQA: Recurrent Memory Transformer-based Blind Video Quality Assessment for Enhanced Video Content
Tianhao Peng, Chen Feng, Duolikun Danier, Fan Zhang, Benoit Vallade, Alex Mackin, David Bull