Reference Image Quality Assessment
Reference image quality assessment (IQA) aims to automatically evaluate the perceptual quality of images, either with (full-reference) or without (no-reference) a pristine reference image. Current research heavily emphasizes no-reference IQA, focusing on lightweight, efficient deep learning models (often based on transformers and convolutional neural networks) that accurately predict human judgments of image quality, even for high-resolution images and on mobile devices. These advances are crucial for applications ranging from automated image selection and enhancement to optimizing image compression and improving the user experience in image-based technologies.
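To illustrate the full-reference versus no-reference distinction, the following minimal Python sketch computes PSNR, a classic full-reference metric that requires the pristine image, alongside a naive no-reference sharpness proxy that scores the distorted image alone. This is a toy illustration only; the function names and the gradient-based proxy are assumptions for demonstration, not the learned models proposed in the papers listed below, which replace such hand-crafted proxies with trained networks.

import numpy as np

def psnr(reference: np.ndarray, distorted: np.ndarray, max_val: float = 255.0) -> float:
    # Full-reference quality: compares the distorted image against a pristine reference.
    mse = np.mean((reference.astype(np.float64) - distorted.astype(np.float64)) ** 2)
    if mse == 0:
        return float("inf")
    return 10.0 * np.log10((max_val ** 2) / mse)

def sharpness_proxy(image: np.ndarray) -> float:
    # No-reference proxy: scores the image by itself (mean gradient magnitude).
    # A hand-crafted stand-in for the learned no-reference models discussed above.
    gy, gx = np.gradient(image.astype(np.float64))
    return float(np.mean(np.hypot(gx, gy)))

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    reference = rng.integers(0, 256, size=(64, 64)).astype(np.float64)
    distorted = np.clip(reference + rng.normal(0, 10, size=(64, 64)), 0, 255)
    print(f"PSNR (full-reference): {psnr(reference, distorted):.2f} dB")
    print(f"Sharpness proxy (no-reference): {sharpness_proxy(distorted):.2f}")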
Papers
MSLIQA: Enhancing Learning Representations for Image Quality Assessment through Multi-Scale Learning
Nasim Jamshidi Avanaki, Abhijay Ghildiyal, Nabajeet Barman, Saman Zadtootaghaj
A Deep-Learning-Based Label-free No-Reference Image Quality Assessment Metric: Application in Sodium MRI Denoising
Shuaiyu Yuan, Tristan Whitmarsh, Dimitri A Kessler, Otso Arponen, Mary A McLean, Gabrielle Baxter, Frank Riemer, Aneurin J Kennerley, William J Brackenbury, Fiona J Gilbert, Joshua D Kaggie
Adaptive Image Quality Assessment via Teaching Large Multimodal Model to Compare
Hanwei Zhu, Haoning Wu, Yixuan Li, Zicheng Zhang, Baoliang Chen, Lingyu Zhu, Yuming Fang, Guangtao Zhai, Weisi Lin, Shiqi Wang
A study of why we need to reassess full reference image quality assessment with medical images
Anna Breger, Ander Biguri, Malena Sabaté Landman, Ian Selby, Nicole Amberg, Elisabeth Brunner, Janek Gröhl, Sepideh Hatamikia, Clemens Karner, Lipeng Ning, Sören Dittmer, Michael Roberts, AIX-COVNET Collaboration, Carola-Bibiane Schönlieb