XAI Research
Explainable AI (XAI) research aims to make the decision-making processes of complex AI models transparent and understandable to humans, addressing concerns about "black box" systems. Current research focuses on developing and evaluating explanation methods, including those based on gradients, attention mechanisms, and concepts, benchmarked across diverse datasets and model architectures. This work is crucial for building trust in AI systems, improving their usability in high-stakes domains such as healthcare and finance, and ensuring fairness and accountability in their applications. A significant trend is the shift toward user-centered design, which tailors explanations to the needs and expertise levels of specific users.
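As a concrete illustration of the gradient-based family of explanation methods mentioned above, the sketch below computes a vanilla saliency map: the gradient of the predicted-class score with respect to the input, whose magnitude indicates how sensitive the prediction is to each feature. This is a minimal sketch assuming PyTorch; the toy model and random input are placeholders, not taken from the papers listed here.

# Minimal sketch of a gradient-based explanation ("vanilla saliency"),
# assuming PyTorch; the tiny model and random input are illustrative only.
import torch
import torch.nn as nn

model = nn.Sequential(nn.Linear(4, 8), nn.ReLU(), nn.Linear(8, 3))
model.eval()

x = torch.randn(1, 4, requires_grad=True)   # one example with 4 features
logits = model(x)
cls = logits.argmax(dim=1).item()           # class the model predicts

# Backpropagate the predicted-class logit to the input; the absolute
# gradient magnitude serves as a per-feature importance score.
logits[0, cls].backward()
saliency = x.grad.abs().squeeze(0)
print(saliency)  # larger values = features the prediction is more sensitive to

Attention- and concept-based methods follow the same spirit but attribute the prediction to attention weights or learned high-level concepts rather than raw input gradients.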
Papers
Towards User-Focused Research in Training Data Attribution for Human-Centered Explainable AI
Elisa Nguyen, Johannes Bertram, Evgenii Kortukov, Jean Y. Song, Seong Joon Oh
Navigating the Maze of Explainable AI: A Systematic Approach to Evaluating Methods and Metrics
Lukas Klein, Carsten T. Lüth, Udo Schlegel, Till J. Bungert, Mennatallah El-Assady, Paul F. Jäger