Image Quality Assessment
Image quality assessment (IQA) aims to automatically evaluate the perceptual quality of images, either with (full-reference) or without (no-reference) a pristine reference image. Current research heavily emphasizes no-reference IQA, focusing on lightweight, efficient deep learning models (often transformers and convolutional neural networks) that accurately predict human judgments of image quality, including on high-resolution images and mobile devices. These advances matter for applications ranging from automated image selection and enhancement to optimizing image compression and improving the user experience in image-based technologies.
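To make the no-reference setting concrete, here is a minimal classical sketch (not taken from any of the papers listed below): the variance-of-Laplacian score, a well-known hand-crafted no-reference proxy for sharpness. Modern learned IQA models replace such heuristics with networks trained on human opinion scores, but the input/output contract is the same: an image in, a scalar quality estimate out. The function name and thresholds here are illustrative, not from any specific library.

```python
import numpy as np

def laplacian_variance(img: np.ndarray) -> float:
    """Naive no-reference sharpness score: variance of the Laplacian.

    `img` is a 2-D grayscale float array. Higher scores indicate more
    high-frequency detail (sharper images); low scores suggest blur.
    """
    # 3x3 Laplacian response computed via explicit array shifts,
    # so no SciPy/OpenCV dependency is needed.
    lap = (-4.0 * img[1:-1, 1:-1]
           + img[:-2, 1:-1] + img[2:, 1:-1]
           + img[1:-1, :-2] + img[1:-1, 2:])
    return float(lap.var())

# A high-contrast checkerboard should score higher than a flat image.
sharp = (np.indices((32, 32)).sum(axis=0) % 2).astype(float)
flat = np.full((32, 32), 0.5)
assert laplacian_variance(sharp) > laplacian_variance(flat)
```

A learned no-reference model would expose the same interface, with the hand-crafted kernel replaced by a trained feature extractor and a regression head fit to mean opinion scores.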
Papers
Backdoor Attacks against No-Reference Image Quality Assessment Models via A Scalable Trigger
Yi Yu, Song Xia, Xun Lin, Wenhan Yang, Shijian Lu, Yap-peng Tan, Alex Kot
Light Field Image Quality Assessment With Auxiliary Learning Based on Depthwise and Anglewise Separable Convolutions
Qiang Qu, Xiaoming Chen, Vera Chung, Zhibo Chen
MSLIQA: Enhancing Learning Representations for Image Quality Assessment through Multi-Scale Learning
Nasim Jamshidi Avanaki, Abhijay Ghildiyal, Nabajeet Barman, Saman Zadtootaghaj
A Deep-Learning-Based Label-free No-Reference Image Quality Assessment Metric: Application in Sodium MRI Denoising
Shuaiyu Yuan, Tristan Whitmarsh, Dimitri A Kessler, Otso Arponen, Mary A McLean, Gabrielle Baxter, Frank Riemer, Aneurin J Kennerley, William J Brackenbury, Fiona J Gilbert, Joshua D Kaggie