Explanation Strategy
Explanation strategies in Explainable AI (XAI) aim to improve users' trust in and satisfaction with AI systems by providing tailored explanations that meet diverse needs. Current research focuses on developing multi-shot approaches that combine several explanation types (e.g., counterfactuals, language-based descriptions) and on adapting the strategy to the user's expertise, the task's complexity, and the risk involved. Effective explanation strategies are crucial for building trustworthy AI systems, particularly in high-stakes domains such as healthcare and robotics, where they foster human-AI collaboration and improve decision-making.
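To make the adaptation idea concrete, below is a minimal, hypothetical sketch of a rule-based explanation strategy. The `ExplanationType`, `ExplanationContext`, and `select_explanation` names, and the specific rules (high-risk tasks get counterfactuals, novices get natural-language summaries, experts get feature attributions), are illustrative assumptions, not the method of any particular paper.

```python
from dataclasses import dataclass
from enum import Enum, auto


class ExplanationType(Enum):
    """Candidate explanation types a strategy can choose among (illustrative)."""
    COUNTERFACTUAL = auto()       # "had feature X been Y, the outcome would differ"
    FEATURE_IMPORTANCE = auto()   # ranked feature attributions
    NATURAL_LANGUAGE = auto()     # plain-language summary of the decision


@dataclass
class ExplanationContext:
    """Context the strategy adapts to; both fields are assumed inputs."""
    user_expertise: str   # "novice" or "expert"
    risk_level: str       # "low" or "high"


def select_explanation(ctx: ExplanationContext) -> ExplanationType:
    """Pick an explanation type by matching context against simple rules.

    High-risk decisions favor actionable counterfactuals; novice users
    favor natural-language summaries; experts get feature attributions.
    """
    if ctx.risk_level == "high":
        return ExplanationType.COUNTERFACTUAL
    if ctx.user_expertise == "novice":
        return ExplanationType.NATURAL_LANGUAGE
    return ExplanationType.FEATURE_IMPORTANCE


if __name__ == "__main__":
    ctx = ExplanationContext(user_expertise="novice", risk_level="high")
    print(select_explanation(ctx))  # ExplanationType.COUNTERFACTUAL
```

A multi-shot strategy would extend this single selection into a sequence, e.g., opening with a natural-language summary and following up with a counterfactual if the user asks for more detail.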