Explainable Image Quality Assessment
Explainable Image Quality Assessment (IQA) focuses on developing methods that not only score image quality but also explain *why* an image receives a particular score. Current research utilizes vision-language models, particularly those leveraging attributes or antonym prompt pairs, to identify and quantify image distortions (e.g., blur, poor lighting) contributing to the overall quality score. This increased transparency is crucial for various applications, including improving medical image analysis, optimizing text-to-image generation, and enhancing the efficiency of telemedicine consultations by providing actionable feedback on image quality issues. The ultimate goal is to create more reliable and trustworthy IQA systems, bridging the gap between automated assessment and human understanding.
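The antonym-prompt idea mentioned above can be illustrated with a minimal sketch: the image embedding from a vision-language encoder is compared against a positive/negative prompt pair (e.g. "Good photo." vs. "Bad photo."), and the softmax over the two similarities yields an interpretable quality probability. The function name, the toy embeddings, and the temperature value below are illustrative assumptions, not a specific published implementation; a real system would obtain the embeddings from a CLIP-style encoder.

```python
import numpy as np

def antonym_prompt_score(image_emb, pos_emb, neg_emb, temperature=100.0):
    """Quality score as the softmax probability of the positive prompt.

    Embeddings are assumed to come from a CLIP-style encoder; the
    temperature plays the role of CLIP's learned logit scale.
    (Illustrative sketch, not a specific library's API.)
    """
    def cos(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    logits = temperature * np.array(
        [cos(image_emb, pos_emb), cos(image_emb, neg_emb)]
    )
    logits -= logits.max()  # numerical stability before exponentiation
    probs = np.exp(logits) / np.exp(logits).sum()
    return probs[0]  # probability mass on the "good" prompt

# Toy example with hand-made vectors standing in for real encoder outputs.
rng = np.random.default_rng(0)
pos = rng.normal(size=64)
pos /= np.linalg.norm(pos)          # embedding of "Good photo."
neg = -pos                          # idealized antonym: "Bad photo."
img = pos + 0.3 * rng.normal(size=64)
img /= np.linalg.norm(img)          # image embedding, close to "good"

score = antonym_prompt_score(img, pos, neg)
```

Running per-distortion prompt pairs (e.g. "Sharp photo." vs. "Blurry photo.") the same way is what lets such methods attribute the overall score to individual quality issues.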