Explainable AI Systems

Explainable AI (XAI) systems aim to make the decision-making processes of artificial intelligence models transparent and understandable to humans. Current research emphasizes two directions: developing robust evaluation methods, including human-centered assessments and algorithmic validation of individual XAI components, and exploring diverse model architectures, such as those based on fuzzy logic, active inference, and answer set programming, that yield more interpretable explanations. The ultimate goal is to build trustworthy and reliable AI systems by bridging the gap between human and machine understanding, improving human-AI collaboration, and ensuring responsible AI deployment across various applications.
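As a concrete illustration of what "algorithmic validation of individual XAI components" can look like in practice, the sketch below runs a perturbation-based faithfulness check on a feature-attribution explanation: mask the features the explanation ranks as most important and verify that the model's confidence drops more than it does under a random ranking. The model, dataset, and attribution method (permutation importance via scikit-learn) are illustrative assumptions, not methods drawn from the papers summarized here.

```python
# Minimal sketch of an algorithmic XAI validation step (assumed setup, not
# from the source): a perturbation-based faithfulness check for a
# feature-attribution explanation.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance

X, y = load_breast_cancer(return_X_y=True)
model = RandomForestClassifier(random_state=0).fit(X, y)

# Attribution component under test: permutation importance as a simple
# global explanation, yielding a ranking of features (most important first).
result = permutation_importance(model, X, y, n_repeats=5, random_state=0)
ranking = np.argsort(result.importances_mean)[::-1]

def confidence_drop(feature_order, k=5):
    # Mask the top-k features by replacing them with their column means,
    # then measure how far the mean predicted probability of the true
    # class falls relative to the unmasked inputs.
    X_masked = X.copy()
    X_masked[:, feature_order[:k]] = X[:, feature_order[:k]].mean(axis=0)
    base = model.predict_proba(X)[np.arange(len(y)), y].mean()
    masked = model.predict_proba(X_masked)[np.arange(len(y)), y].mean()
    return base - masked

# A faithful explanation should cause a larger confidence drop than a
# random feature ranking does.
rng = np.random.default_rng(0)
print("drop, explanation ranking:", confidence_drop(ranking))
print("drop, random ranking:     ", confidence_drop(rng.permutation(X.shape[1])))
```

Checks of this form are one simple way to validate an attribution method without human raters; human-centered assessments would instead measure whether the explanations actually help people predict or correct model behavior.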

Papers