Interpretable Graph Learning
Interpretable graph learning aims to develop graph-based machine learning models whose predictions are readily understandable and trustworthy, addressing the "black box" nature of many deep learning approaches. Current research follows two complementary directions: inherently interpretable models, often built from graph neural networks (GNNs) with attention mechanisms or probabilistic logic programming, and post-hoc explanation methods that analyze trained GNNs to extract meaningful insights. The field is crucial for applications demanding transparency and accountability, such as drug discovery, healthcare, and scientific knowledge discovery, where understanding a model's reasoning is as important as its predictive accuracy.
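To make the "inherently interpretable" direction concrete, the sketch below shows a single-head GAT-style attention layer in plain PyTorch that returns its per-edge attention coefficients alongside the node embeddings, so the learned weights can be read off as edge-importance scores. This is a minimal illustration under assumed names (`InterpretableGATLayer`, a toy random graph), not an implementation from any of the listed papers.

```python
# Minimal sketch: attention-based GNN layer whose attention weights double as explanations.
import torch
import torch.nn as nn
import torch.nn.functional as F


class InterpretableGATLayer(nn.Module):
    """Single-head graph attention layer that also returns per-edge attention weights."""

    def __init__(self, in_dim: int, out_dim: int):
        super().__init__()
        self.proj = nn.Linear(in_dim, out_dim, bias=False)
        self.attn = nn.Linear(2 * out_dim, 1, bias=False)  # scores a (source, target) pair

    def forward(self, x, edge_index):
        # x: [num_nodes, in_dim]; edge_index: [2, num_edges] with rows (src, dst).
        h = self.proj(x)
        src, dst = edge_index
        # Raw attention logits for each edge from the concatenated endpoint features.
        e = F.leaky_relu(self.attn(torch.cat([h[src], h[dst]], dim=-1))).squeeze(-1)
        # Softmax over each destination node's incoming edges (shift by max for stability).
        exp_e = (e - e.max()).exp()
        denom = torch.zeros(x.size(0)).index_add_(0, dst, exp_e)
        alpha = exp_e / denom[dst]                      # per-edge attention weight
        # Aggregate neighbor messages weighted by attention.
        out = torch.zeros_like(h).index_add_(0, dst, alpha.unsqueeze(-1) * h[src])
        return out, alpha                               # alpha explains each edge's influence


if __name__ == "__main__":
    # Toy graph: 4 nodes with 8-dimensional features, 5 directed edges.
    x = torch.randn(4, 8)
    edge_index = torch.tensor([[0, 1, 2, 3, 0],
                               [1, 2, 3, 0, 2]])
    layer = InterpretableGATLayer(8, 16)
    out, alpha = layer(x, edge_index)
    print("node embeddings:", out.shape)      # torch.Size([4, 16])
    print("edge attention:", alpha.tolist())  # importance of each edge to its target node
```

Post-hoc explanation methods take the opposite approach: rather than exposing weights like `alpha` during the forward pass, they probe an already-trained GNN (for example, by searching for a small subgraph or feature mask that preserves the prediction) to attribute importance after the fact.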