Effective Explanation

Effective explanation in artificial intelligence (AI) aims to make complex model decisions understandable and trustworthy to users, with the goals of improving user experience and fostering appropriate trust. Current research emphasizes human-centered explanation design, exploring methods such as counterfactual generation, attention mechanisms, and surrogate models (e.g., kernel machines), and investigating how the phrasing of an explanation affects perceived AI agency and responsibility. The field is central to responsible AI development: it enables better understanding of AI systems across applications ranging from healthcare to autonomous vehicles, and promotes more reliable, ethical deployment. A sketch of the surrogate-model approach appears below.
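As one illustration of the surrogate-model approach mentioned above, the following minimal sketch fits an interpretable weighted linear model to a black-box classifier's predictions in the neighborhood of a single input (a LIME-style local surrogate). The synthetic data, the random-forest black box, the Gaussian sampling and proximity kernel, and the ridge surrogate are all illustrative assumptions, not the method of any particular paper listed here.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# --- Illustrative setup: hypothetical data and black-box model ---
rng = np.random.default_rng(0)
X = rng.normal(size=(500, 4))                       # 4 hypothetical features
y = (X[:, 0] + 0.5 * X[:, 1] ** 2 > 0).astype(int)  # hidden decision rule
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_surrogate(model, x, n_samples=1000, scale=0.5):
    """Fit a weighted linear surrogate around instance x (LIME-style sketch)."""
    # Perturb the instance to probe the model's behavior near x.
    Z = x + rng.normal(scale=scale, size=(n_samples, x.shape[0]))
    preds = model.predict_proba(Z)[:, 1]             # black-box outputs
    # Weight each perturbation by its proximity to x (Gaussian kernel).
    weights = np.exp(-np.linalg.norm(Z - x, axis=1) ** 2 / (2 * scale ** 2))
    surrogate = Ridge(alpha=1.0).fit(Z, preds, sample_weight=weights)
    return surrogate.coef_                           # local feature importances

x0 = X[0]
print("Local feature importances:", np.round(local_surrogate(black_box, x0), 3))
```

The surrogate's coefficients serve as the explanation: they approximate which features drive the black-box prediction for this one instance, trading global fidelity for local interpretability.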

Papers