Explainable Graph
Explainable graph methods aim to enhance the transparency and trustworthiness of machine learning models, particularly those operating on graph-structured data, by providing understandable explanations for their predictions. Current research focuses on accurately representing higher-order relationships within graphs and on evaluating the effectiveness of different explanation designs, both qualitative and quantitative, often with graph neural networks (GNNs) as the underlying predictive model. This work is crucial for building trust in AI systems across diverse applications, from recommender systems and automated driving to drug discovery, where understanding model decisions is paramount for safety, accountability, and effective human-AI collaboration.
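One common family of graph explanation methods is perturbation-based: the importance of a graph component (such as an edge) is measured by how much the model's prediction changes when that component is removed. The sketch below illustrates this idea on a toy "model" and graph; the scoring function and all names are illustrative assumptions, not part of any specific library or the methods surveyed above.

```python
# Hypothetical sketch of perturbation-based edge importance.
# The "model" here is a toy scoring function, standing in for a GNN.

def predict(edges, features):
    # Toy prediction: sum over edges of the product of endpoint features.
    return sum(features[u] * features[v] for u, v in edges)

def explain_edges(edges, features):
    # Importance of each edge = drop in prediction when it is removed.
    base = predict(edges, features)
    return {e: base - predict([x for x in edges if x != e], features)
            for e in edges}

# Tiny example graph: three nodes with scalar features, three edges.
features = {0: 1.0, 1: 2.0, 2: 3.0}
edges = [(0, 1), (1, 2), (0, 2)]
scores = explain_edges(edges, features)
# Edge (1, 2) contributes most here: 2.0 * 3.0 = 6.0
```

Real GNN explainers (e.g., learned soft edge masks) refine this idea, optimizing a continuous mask over edges rather than deleting them one at a time, but the underlying question is the same: which parts of the graph does the prediction depend on?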