Explainable Deep Learning
Explainable Deep Learning (XDL) aims to make the decision-making processes of deep learning models transparent and understandable, addressing the "black box" nature of these powerful systems. Current research focuses on methods that provide explanations for deep learning predictions, often incorporating techniques such as attention mechanisms, surrogate models, and rule-based systems within architectures like convolutional and recurrent neural networks. This work is crucial for building trust and enabling the adoption of deep learning in high-stakes applications such as medical diagnosis and financial modeling, where understanding the reasoning behind a prediction is paramount.
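Of the techniques mentioned above, surrogate modelling is the easiest to illustrate: a simple, interpretable model is fitted to the black-box model's predictions in a small neighbourhood of one input, and its coefficients are read as a local explanation. The sketch below is a minimal, self-contained illustration of that idea in plain NumPy; the stand-in "black box" network, the sampling scale, and the kernel width are illustrative assumptions, not taken from any particular paper.

import numpy as np

rng = np.random.default_rng(0)

# Stand-in for an opaque deep model: a fixed two-layer network with random
# weights that maps 4 input features to a probability for the positive class.
W1 = rng.standard_normal((4, 8))
b1 = rng.standard_normal(8)
W2 = rng.standard_normal(8)
b2 = 0.0

def black_box(X):
    h = np.tanh(X @ W1 + b1)                      # hidden layer
    return 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))   # sigmoid output probability

def local_surrogate(x, n_samples=500, scale=0.1):
    """Fit a proximity-weighted linear model to the black box around x.
    The returned coefficients act as local feature importances."""
    Z = x + scale * rng.standard_normal((n_samples, x.size))    # perturb the input
    y = black_box(Z)                                            # query the black box
    w = np.exp(-np.sum((Z - x) ** 2, axis=1) / (2 * scale**2))  # proximity kernel
    A = np.hstack([Z, np.ones((n_samples, 1))])                 # add intercept column
    AW = A * w[:, None]
    beta, *_ = np.linalg.lstsq(AW.T @ A, AW.T @ y, rcond=None)  # weighted least squares
    return beta[:-1]                                            # drop the intercept

x0 = np.array([0.5, -1.0, 0.2, 0.8])
print("local feature weights:", local_surrogate(x0))

Positive weights indicate features that locally push the prediction toward the positive class. Because the surrogate only needs query access to the model, the same recipe applies to any network that can be evaluated, which is what makes surrogate explanations attractive for otherwise opaque architectures.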