Explainable Face Recognition

Explainable Face Recognition (XFR) aims to make the decision-making processes of face recognition systems more transparent and understandable. Current research focuses on methods that generate visual explanations, such as saliency maps that highlight the facial regions driving a recognition decision, often using gradient-based techniques or frequency-domain analysis alongside model-agnostic approaches. This work is crucial for building trust in face recognition technology, addressing biases, and improving the accountability and reliability of these systems in applications ranging from security to personal identification.
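As a concrete illustration of the gradient-based family mentioned above, the following is a minimal sketch (in PyTorch) of a vanilla-gradient saliency map for a face verification setup: it backpropagates the cosine match score between a probe and a reference face onto the probe pixels. The embedding network and the `probe`/`reference` tensors are hypothetical placeholders, not the method of any specific paper.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F


def gradient_saliency(model: nn.Module,
                      probe: torch.Tensor,
                      reference: torch.Tensor) -> torch.Tensor:
    """Per-pixel saliency for the probe image: pixels whose change most
    affects the cosine match score against the reference embedding."""
    model.eval()
    probe = probe.clone().requires_grad_(True)

    # Embed both faces; no gradient is needed for the reference.
    with torch.no_grad():
        ref_emb = F.normalize(model(reference), dim=-1)
    probe_emb = F.normalize(model(probe), dim=-1)

    # Cosine similarity acts as the match score to explain.
    score = (probe_emb * ref_emb).sum()
    score.backward()

    # Collapse channels: max |gradient| per pixel -> (N, H, W) map.
    return probe.grad.detach().abs().max(dim=1).values


# Usage with any face embedding network (hypothetical names):
# model = MyFaceEmbeddingNet()
# saliency = gradient_saliency(model, probe_batch, reference_batch)
```

In practice, published XFR methods refine this basic idea, for example by smoothing or aggregating gradients, contrasting matching and non-matching identities, or replacing gradients with perturbation-based, model-agnostic attribution.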

Papers