Interactive Explanation
Interactive explanation aims to improve user understanding of, and trust in, complex machine learning models by letting users actively engage with the explanation process instead of receiving a single static output. Current research focuses on interactive interfaces that combine explanation methods such as feature attribution, argumentation frameworks, and natural language dialogues, often leveraging constraint logic programming or belief change theory for a principled approach. This work is central to the transparency and reliability of AI systems across diverse applications, from database querying to scientific literature recommendation, and ultimately fosters user confidence and responsible AI development.
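The summary above mentions feature attribution as one of the explanation methods exposed through interactive interfaces. Below is a minimal sketch of what such an interactive attribution loop can look like. It is not drawn from any specific system in the literature: the feature names, the hand-set linear model, and helpers such as `interactive_session` are all hypothetical, chosen only to keep the example self-contained.

```python
# Minimal sketch of an interactive feature-attribution loop (hypothetical
# names throughout): the user sees per-feature contributions of a linear
# model for one instance and can drill into individual features.
import numpy as np

rng = np.random.default_rng(0)

# Toy data: three named features and a hand-set linear model standing in
# for any trained predictor.
FEATURES = ["age", "income", "tenure"]
weights = np.array([0.8, -0.5, 1.2])
bias = 0.1
X = rng.normal(size=(100, 3))
baseline = X.mean(axis=0)


def contributions(x):
    # Per-feature contribution relative to the dataset baseline,
    # i.e. w_j * (x_j - mean_j) for a linear model.
    return weights * (x - baseline)


def explain(x):
    # Static part of the explanation: prediction plus ranked contributions.
    prediction = float(weights @ x + bias)
    print(f"prediction: {prediction:.3f}")
    for name, c in sorted(zip(FEATURES, contributions(x)), key=lambda t: -abs(t[1])):
        print(f"  {name:<8} {c:+.3f}")


def interactive_session(x):
    # Interactive part: the user repeatedly narrows the explanation to a
    # single feature of interest.
    explain(x)
    while True:
        query = input("feature to inspect (or 'quit'): ").strip()
        if query == "quit":
            break
        if query in FEATURES:
            j = FEATURES.index(query)
            print(f"{query}: value={x[j]:.3f}, baseline={baseline[j]:.3f}, "
                  f"weight={weights[j]:+.3f}, contribution={contributions(x)[j]:+.3f}")
        else:
            print(f"unknown feature; choose one of {FEATURES}")


if __name__ == "__main__":
    interactive_session(X[0])
```

The query loop is the essentially interactive element: rather than presenting one fixed attribution chart, the interface lets the user drill down feature by feature, which is the kind of follow-up dialogue the research surveyed here aims to support.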