Selective Explanation

Selective explanation in AI focuses on providing users with tailored explanations rather than exhaustive ones, with the goal of improving understanding and decision-making. Current research explores methods for generating contrastive explanations that highlight differences between AI and human reasoning, as well as techniques for selectively presenting explanations based on user needs or model accuracy, often via amortized explainers or rule-based approaches. This work is significant because it addresses the limitations of overly verbose or inaccurate explanations, aiming to strengthen human-AI collaboration and trust, particularly in high-stakes applications such as autonomous driving and medical diagnosis.
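The idea of presenting an explanation only when it is likely to help can be made concrete with a minimal sketch. The code below gates a contrastive explanation on the model's predictive confidence, used here as an illustrative proxy for the "model accuracy" criterion above; all function and field names are hypothetical, not drawn from any specific paper.

```python
def predict_with_selective_explanation(probs, labels, threshold=0.8):
    """Return a prediction, attaching a contrastive explanation only
    when the model's confidence falls below `threshold` (illustrative
    selection rule; real systems may condition on user needs instead)."""
    conf = max(probs)
    pred = labels[probs.index(conf)]
    if conf >= threshold:
        # High confidence: present the prediction alone, avoiding
        # an exhaustive or unneeded explanation.
        return {"prediction": pred, "explanation": None}
    # Low confidence: surface a contrastive explanation comparing the
    # top choice against the runner-up, highlighting why one was chosen.
    ranked = sorted(zip(labels, probs), key=lambda x: -x[1])
    (top, p1), (runner, p2) = ranked[0], ranked[1]
    return {
        "prediction": pred,
        "explanation": f"Chose '{top}' over '{runner}' "
                       f"(p={p1:.2f} vs p={p2:.2f})",
    }

# Confident prediction: no explanation is shown.
print(predict_with_selective_explanation([0.95, 0.03, 0.02], ["stop", "go", "yield"]))
# Uncertain prediction: a contrastive explanation accompanies it.
print(predict_with_selective_explanation([0.55, 0.40, 0.05], ["stop", "go", "yield"]))
```

The threshold here is a free design parameter; selective-explanation methods differ mainly in how this gating decision is learned or specified.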

Papers