Human-Understandable Explanation
Human-understandable explanation (HUE) in artificial intelligence focuses on making the decision-making processes of complex models, such as deep neural networks and graph neural networks, transparent and interpretable to humans. Current research emphasizes methods that generate explanations in various formats, including textual concepts, logic rules, and visualizations of feature importance, often leveraging techniques such as concept relevance propagation and counterfactual analysis. The goal is to improve trust, facilitate collaboration between AI experts and domain specialists, and enable responsible deployment of AI systems across diverse applications, particularly in high-stakes domains such as healthcare and cybersecurity.
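To make the counterfactual idea concrete, the sketch below finds the smallest input change that flips a classifier's decision; the resulting per-feature deltas serve as the explanation ("if these features had been slightly different, the prediction would change"). This is a minimal illustration, not the method of any specific paper listed here: it assumes a linear decision function, for which the nearest point on the decision boundary has a closed form, and the model weights and input are toy values.

```python
import numpy as np

def counterfactual_linear(x, w, b, margin=1e-3):
    """Smallest L2 perturbation of x that flips sign(w @ x + b).

    For a linear decision function f(x) = w @ x + b, the closest point on
    the boundary is x - (f(x) / ||w||^2) * w; scaling the step by
    (1 + margin) pushes slightly past it so the class actually changes.
    """
    f = w @ x + b
    delta = -(f / (w @ w)) * w        # exact projection onto the boundary
    delta *= 1.0 + margin             # small overshoot to flip the sign
    return x + delta

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    w = rng.normal(size=4)            # toy model weights (assumed, not learned)
    b = 0.5
    x = rng.normal(size=4)            # instance to explain
    x_cf = counterfactual_linear(x, w, b)
    print("original prediction:     ", np.sign(w @ x + b))
    print("counterfactual prediction:", np.sign(w @ x_cf + b))
    print("feature changes:         ", x_cf - x)  # the human-readable 'explanation'
```

For nonlinear models the closed form no longer applies, and counterfactual methods in the literature typically search for the perturbation via gradient-based or combinatorial optimization instead; the output format, a small set of feature changes, stays the same.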