Effective Explanation
Effective explanation in artificial intelligence (AI) aims to make complex model decisions understandable to users, with the goal of improving user experience and fostering appropriately calibrated trust. Current research emphasizes human-centered design of explanations, exploring methods such as counterfactual generation, attention mechanisms, and surrogate models (e.g., kernel machines), while also investigating how the phrasing of an explanation shapes perceived AI agency and responsibility. This work is central to responsible AI development: it enables better understanding of AI systems across diverse applications, from healthcare to autonomous vehicles, and supports more reliable and ethical deployment. A minimal sketch of the surrogate-model approach appears below.
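As an illustration of the surrogate-model approach mentioned above, the sketch below fits a LIME-style local linear surrogate around a single prediction of a black-box classifier: the model is probed with perturbations near one input, and a proximity-weighted linear fit yields per-feature importances as the explanation. The classifier, synthetic data, and kernel width are illustrative assumptions, not taken from any particular surveyed paper.

```python
# Minimal sketch of a local surrogate explanation (LIME-style).
# Assumptions: a synthetic dataset and a random-forest "black box";
# any local explainer of this family would follow the same shape.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import Ridge

# Train an opaque model we want to explain.
X, y = make_classification(n_samples=500, n_features=5, random_state=0)
black_box = RandomForestClassifier(random_state=0).fit(X, y)

def local_surrogate(instance, n_samples=1000, kernel_width=1.0, seed=0):
    """Fit a proximity-weighted linear surrogate around `instance`."""
    rng = np.random.default_rng(seed)
    # Perturb the instance with Gaussian noise to probe the model locally.
    perturbed = instance + rng.normal(0.0, 1.0, size=(n_samples, instance.size))
    preds = black_box.predict_proba(perturbed)[:, 1]
    # Weight samples by proximity so nearby points dominate the fit.
    dists = np.linalg.norm(perturbed - instance, axis=1)
    weights = np.exp(-(dists ** 2) / (kernel_width ** 2))
    surrogate = Ridge(alpha=1.0).fit(perturbed, preds, sample_weight=weights)
    return surrogate.coef_  # per-feature local importance

for i, c in enumerate(local_surrogate(X[0])):
    print(f"feature_{i}: {c:+.3f}")
```

The exponential kernel keeps the fit local; in practice the kernel width and the perturbation distribution would be tuned to the data, and counterfactual or attention-based methods would replace the linear fit with a different explanatory artifact.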