XAI Design

Explainable AI (XAI) design focuses on methods that make the decision-making processes of AI systems understandable to humans, increasing trust and supporting responsible use. Current research emphasizes tailoring explanations to individual user needs, including personality traits, and developing frameworks that integrate XAI methods into existing AI development pipelines, often using techniques such as attention maps and symbolic expressions for feature extraction and explanation generation. This work is crucial for building trustworthy AI systems across domains from healthcare to software engineering: it addresses the trade-off between explanation complexity and user comprehension, and it designs proactively for potential system failures.
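
To make the idea of feature-level explanation concrete, here is a minimal sketch of input-times-gradient attribution for a linear model, one of the simplest techniques in the family the summary alludes to. The model, feature values, and function names are illustrative assumptions, not taken from any of the surveyed papers.

```python
def saliency(weights, x):
    """Input-times-gradient attribution for a linear scorer f(x) = sum(w_i * x_i).

    For a linear model, the gradient of the score with respect to
    feature i is simply its weight w_i, so the contribution of
    feature i to the prediction is w_i * x_i.
    """
    return [w_i * x_i for w_i, x_i in zip(weights, x)]

def rank_features(attributions):
    """Indices of features sorted by absolute contribution, largest first."""
    return sorted(range(len(attributions)), key=lambda i: -abs(attributions[i]))

# Toy model: score = w . x over three hypothetical input features.
w = [0.8, -0.1, 0.3]
x = [1.0, 5.0, 0.0]

attr = saliency(w, x)       # per-feature contributions: [0.8, -0.5, 0.0]
print(rank_features(attr))  # explanation: which features drove the score
```

Presenting the ranked contributions, rather than the raw model internals, is the kind of complexity-versus-comprehension trade-off the summary describes: the explanation compresses the model's computation into a short, human-readable ordering.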

Papers