XAI Solution
Explainable AI (XAI) aims to make the decision-making processes of artificial intelligence models more transparent and understandable, addressing concerns about the "black box" nature of many complex algorithms. Current research focuses on developing and evaluating model-agnostic XAI methods, particularly for deep learning models in diverse applications like cybersecurity, healthcare, and industrial settings, often employing techniques such as SHAP values and contextual importance measures. This work is crucial for building trust in AI systems, improving their reliability, and facilitating responsible AI development and deployment across various sectors.
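To make the SHAP idea mentioned above concrete, the sketch below computes exact Shapley-value attributions for a toy model by enumerating all feature coalitions. This is a minimal illustration of the principle behind SHAP, not the optimized algorithms used by the actual `shap` library; the model, inputs, and baseline are hypothetical examples, and exhaustive enumeration is only feasible for a handful of features.

```python
from itertools import combinations
from math import factorial

def shapley_values(f, x, baseline):
    """Exact Shapley attributions for model f at point x, relative to a
    baseline input. Absent features take their baseline values.
    Enumerates all coalitions, so only practical for few features."""
    n = len(x)
    phi = [0.0] * n
    for i in range(n):
        others = [j for j in range(n) if j != i]
        for k in range(n):
            for S in combinations(others, k):
                # Shapley kernel weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                with_i = [x[j] if (j in S or j == i) else baseline[j] for j in range(n)]
                without_i = [x[j] if j in S else baseline[j] for j in range(n)]
                phi[i] += w * (f(with_i) - f(without_i))
    return phi

# Hypothetical model with a linear term and an interaction term.
model = lambda v: 2.0 * v[0] + 1.0 * v[1] * v[2]
x, base = [1.0, 2.0, 3.0], [0.0, 0.0, 0.0]
phi = shapley_values(model, x, base)
# Efficiency axiom: attributions sum to f(x) - f(baseline).
assert abs(sum(phi) - (model(x) - model(base))) < 1e-9
```

The interaction term's contribution is split evenly between the two features involved, while the linear term is attributed entirely to its feature, which is exactly the behavior the Shapley axioms guarantee.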
Papers
Do We Need Explainable AI in Companies? Investigation of Challenges, Expectations, and Chances from Employees' Perspective
Katharina Weitz, Chi Tai Dang, Elisabeth André
Utilizing Explainable AI for improving the Performance of Neural Networks
Huawei Sun, Lorenzo Servadei, Hao Feng, Michael Stephan, Robert Wille, Avik Santra