Interactive Explanation

Interactive explanation aims to improve user understanding of and trust in complex machine learning models by letting users actively engage with the explanation process rather than receive a single static output. Current research focuses on interactive interfaces that combine explanation methods such as feature attribution, argumentation frameworks, and natural language dialogues, often grounding the interaction in constraint logic programming or belief change theory to place it on a principled formal footing. This work is important for the transparency and reliability of AI systems across diverse applications, from database querying to scientific literature recommendation, and ultimately supports user confidence and responsible AI development.
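
To make the idea concrete, the sketch below is a hypothetical illustration (not drawn from any of the surveyed papers) of how an interactive explainer might answer "why" and "what if" queries about a single prediction, using exact feature attribution for a simple linear model. All class, function, and feature names are assumptions made for the example.

```python
# Minimal sketch of an interactive feature-attribution explanation loop.
# Assumption: a linear model, so each feature's contribution is weight * value.

from typing import Dict, List, Tuple


class InteractiveExplainer:
    """Answers 'why' and 'what if' queries about a linear model's prediction."""

    def __init__(self, weights: Dict[str, float], bias: float) -> None:
        self.weights = weights
        self.bias = bias

    def predict(self, instance: Dict[str, float]) -> float:
        return self.bias + sum(self.weights[f] * v for f, v in instance.items())

    def why(self, instance: Dict[str, float], top_k: int = 2) -> List[Tuple[str, float]]:
        # Feature attribution for a linear model: contribution = weight * value.
        contributions = {f: self.weights[f] * v for f, v in instance.items()}
        return sorted(contributions.items(), key=lambda kv: abs(kv[1]), reverse=True)[:top_k]

    def what_if(self, instance: Dict[str, float], feature: str, new_value: float) -> float:
        # Counterfactual follow-up: how does the prediction change if one feature changes?
        modified = dict(instance, **{feature: new_value})
        return self.predict(modified)


if __name__ == "__main__":
    # Hypothetical loan-scoring example with made-up weights.
    explainer = InteractiveExplainer(weights={"income": 0.5, "debt": -0.8, "age": 0.1}, bias=0.2)
    applicant = {"income": 3.0, "debt": 2.5, "age": 4.0}

    print("prediction:", round(explainer.predict(applicant), 2))
    # "Why was the score low?" -> the strongest contributions, positive or negative.
    print("why:", explainer.why(applicant))
    # "What if the applicant halved their debt?" -> an updated prediction.
    print("what if debt=1.25:", round(explainer.what_if(applicant, "debt", 1.25), 2))
```

In a real system the dialogue loop would sit on top of a learned model and an attribution method such as SHAP or an argumentation framework; the point of the sketch is only the interaction pattern of follow-up "why" and "what if" queries.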

Papers