Explainable AI Systems
Explainable AI (XAI) systems aim to make the decision-making processes of artificial intelligence models transparent and understandable to humans. Current research emphasizes two directions: developing robust evaluation methods, including human-centered assessments and algorithmic validation of individual XAI components, and exploring model architectures based on fuzzy logic, active inference, and answer set programming that yield more interpretable explanations. The ultimate goal is to build trustworthy and reliable AI systems by bridging the gap between human and machine understanding, improving human-AI collaboration, and ensuring responsible AI deployment across applications.
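One common form of "algorithmic validation of individual XAI components" is checking that a feature-attribution method actually identifies the features a model depends on. The sketch below is a minimal, dependency-free illustration of permutation importance: the model, its feature names, and its weights are all hypothetical assumptions chosen so the expected ranking is known in advance.

```python
import random

# Hypothetical linear "model"; feature names and weights are illustrative.
WEIGHTS = {"age": 0.8, "income": 0.1, "noise": 0.0}

def model(row):
    return sum(WEIGHTS[f] * row[f] for f in WEIGHTS)

def mse(rows, targets):
    return sum((model(r) - t) ** 2 for r, t in zip(rows, targets)) / len(rows)

def permutation_importance(rows, targets, feature, seed=0):
    """Shuffle one feature column and report the rise in MSE.

    A large rise means the model genuinely relies on that feature;
    a rise of zero means the feature is ignored."""
    rng = random.Random(seed)
    shuffled_vals = [r[feature] for r in rows]
    rng.shuffle(shuffled_vals)
    shuffled = [dict(r, **{feature: v}) for r, v in zip(rows, shuffled_vals)]
    return mse(shuffled, targets) - mse(rows, targets)

# Tiny synthetic dataset whose target depends mostly on "age".
rng = random.Random(1)
rows = [{"age": rng.random(), "income": rng.random(), "noise": rng.random()}
        for _ in range(200)]
targets = [0.8 * r["age"] + 0.1 * r["income"] for r in rows]

for f in WEIGHTS:
    print(f, round(permutation_importance(rows, targets, f), 4))
```

With this setup the attribution recovers the ground truth: permuting "age" degrades the fit most, "income" slightly, and "noise" not at all, which is exactly the sanity check an algorithmic XAI evaluation would assert.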