Selective Explanation
Selective explanation in AI focuses on providing users with tailored explanations rather than exhaustive ones, improving understanding and decision-making. Current research explores contrastive explanations that highlight differences between AI and human reasoning, as well as techniques for presenting explanations selectively based on user needs or model accuracy, often via amortized explainers or rule-based approaches. This work matters because it addresses the limitations of overly verbose or inaccurate explanations, aiming to strengthen human-AI collaboration and trust, particularly in high-stakes applications such as autonomous driving and medical diagnosis.
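The two ideas above — contrastive explanation and selectively withholding explanations — can be sketched in a few lines. This is an illustrative toy, not any cited paper's method: the feature names, per-class weights, and confidence threshold are all hypothetical assumptions.

```python
# Toy sketch of selective, contrastive explanation (all names hypothetical).
# An explanation is surfaced only when it is likely to help the user --
# here, when the model's confidence falls below a threshold.

def contrastive_explanation(weights, predicted, contrast):
    """Explain why `predicted` rather than `contrast`: rank the features
    whose per-class weights favor the predicted class over the contrast."""
    diffs = {name: w[predicted] - w[contrast] for name, w in weights.items()}
    # Keep only features that actually favor the predicted class.
    return sorted(((n, d) for n, d in diffs.items() if d > 0),
                  key=lambda item: -item[1])

def selective_explain(confidence, weights, predicted, contrast, threshold=0.8):
    """Return a contrastive explanation only when confidence is below
    `threshold`; otherwise stay silent to avoid verbose, low-value output."""
    if confidence >= threshold:
        return None  # high confidence: suppress the explanation
    return contrastive_explanation(weights, predicted, contrast)

# Hypothetical per-class feature weights for classes 0 and 1.
weights = {"lesion_size": (0.9, 0.2),
           "symmetry":    (0.1, 0.7),
           "texture":     (0.6, 0.5)}

print(selective_explain(0.95, weights, 0, 1))  # confident: no explanation
print(selective_explain(0.60, weights, 0, 1))  # uncertain: ranked reasons
```

The design choice mirrors the motivation in the summary: rather than always explaining, the system gates explanations on a signal (here, raw confidence; in the literature, user need or estimated model accuracy) so that users see them only when they add value.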