Human-Understandable Explanation

Human-understandable explanation (HUE) in artificial intelligence focuses on making the decision-making processes of complex models, such as deep neural networks and graph neural networks, transparent and interpretable to humans. Current research emphasizes methods that generate explanations in formats people can inspect directly, including textual concepts, logic rules, and visualizations of feature importance, often leveraging techniques such as concept relevance propagation and counterfactual analysis. The goal is to improve trust, facilitate collaboration between AI experts and domain specialists, and enable responsible deployment of AI systems across diverse applications, particularly in high-stakes domains like healthcare and cybersecurity.
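
To make the idea of feature-importance explanations concrete, the sketch below computes a simple gradient-based saliency score for a toy classifier. It is a minimal illustration only, not a method from the papers listed here; the PyTorch model, input tensor, and feature names are hypothetical placeholders, and the approach stands in for the broader family of attribution techniques mentioned above.

```python
# Minimal sketch: gradient-based feature importance for a toy classifier.
# The model and input are illustrative placeholders, not from any cited paper.
import torch
import torch.nn as nn

# A small stand-in for a "complex model" whose decisions we want to explain.
model = nn.Sequential(
    nn.Linear(10, 32),
    nn.ReLU(),
    nn.Linear(32, 3),
)
model.eval()

# One synthetic input; requires_grad lets us attribute the prediction to features.
x = torch.randn(1, 10, requires_grad=True)

# Take the score of the predicted class.
logits = model(x)
target_class = logits.argmax(dim=1).item()
score = logits[0, target_class]

# Backpropagate the class score to the input: the per-feature gradient magnitude
# serves as a simple saliency-style importance estimate.
score.backward()
importance = x.grad.abs().squeeze(0)

for i, value in enumerate(importance.tolist()):
    print(f"feature {i}: importance {value:.4f}")
```

In practice, such raw attributions are typically aggregated, visualized (e.g., as heatmaps over image pixels or input tokens), or mapped onto higher-level concepts so that domain specialists, not only AI experts, can interpret them.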

Papers