Explanation Framework

Explanation frameworks aim to make the decision-making processes of complex machine learning models, particularly in natural language processing and graph neural networks, more transparent and understandable. Current research emphasizes interactive and customizable explanations, moving beyond static outputs so that users can explore model predictions through dialogue, varying levels of detail, and controlled feature attribution. This user-centered design fosters trust and a clearer understanding of model behavior across diverse applications, from authorship verification to scientific literature recommendation.
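To make "feature attribution" concrete, below is a minimal, self-contained sketch of one common attribution strategy, occlusion (leave-one-token-out): each token's contribution is estimated as the drop in the model's score when that token is removed. The `toy_sentiment` scorer and all names here are illustrative stand-ins, not methods from any of the surveyed papers.

```python
# Minimal occlusion-based feature-attribution sketch (illustrative only):
# measure each token's contribution as the score change when it is removed.

from typing import Callable, List, Tuple


def occlusion_attribution(
    tokens: List[str],
    score_fn: Callable[[List[str]], float],
) -> List[Tuple[str, float]]:
    """Attribute to each token the score drop caused by occluding it."""
    base = score_fn(tokens)
    attributions = []
    for i in range(len(tokens)):
        occluded = tokens[:i] + tokens[i + 1:]  # drop the i-th token
        attributions.append((tokens[i], base - score_fn(occluded)))
    return attributions


# Toy scorer standing in for a real classifier: fraction of positive words.
POSITIVE = {"great", "excellent", "clear"}


def toy_sentiment(tokens: List[str]) -> float:
    return sum(1.0 for t in tokens if t in POSITIVE) / max(len(tokens), 1)


if __name__ == "__main__":
    sentence = "the explanation was clear and excellent".split()
    for token, weight in occlusion_attribution(sentence, toy_sentiment):
        print(f"{token:12s} {weight:+.3f}")
```

Running the script assigns positive attribution to "clear" and "excellent" and slightly negative attribution to filler words; an interactive framework of the kind described above would expose such scores for the user to inspect and adjust rather than printing them once.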

Papers