Interpretable Deep Learning
Interpretable deep learning aims to make the decision-making processes of deep neural networks transparent and understandable, addressing the "black box" problem that hinders trust and adoption in high-stakes applications. Current research focuses on developing novel architectures like concept bottleneck models and incorporating techniques such as attention mechanisms, counterfactual explanations, and Shapley values to provide insights into model predictions. This field is crucial for building reliable and trustworthy AI systems across various domains, from healthcare and finance to neuroimaging and environmental monitoring, by enabling better understanding, validation, and debugging of complex models.
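Of the techniques named above, Shapley values are perhaps the easiest to illustrate concretely: each feature's contribution to a prediction is its average marginal effect over all coalitions of the other features. The sketch below computes exact Shapley values for a small model by brute force, with absent features replaced by a baseline value; the `model`, `x`, and `baseline` names are illustrative, not from any particular library (real workflows typically use approximations, since exact computation is exponential in the number of features).

```python
from itertools import combinations
from math import factorial

def shapley_values(model, x, baseline):
    """Exact Shapley values for model(x) with n features.

    Features outside a coalition are replaced by their baseline value,
    a common way to define 'feature absence' for black-box models.
    """
    n = len(x)
    values = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):  # coalition sizes 0 .. n-1
            for S in combinations(others, k):
                # Input with coalition S plus feature i present
                with_i = [x[j] if (j in S or j == i) else baseline[j]
                          for j in range(n)]
                # Input with coalition S only
                without_i = [x[j] if j in S else baseline[j]
                             for j in range(n)]
                # Shapley weight |S|! (n - |S| - 1)! / n!
                weight = factorial(k) * factorial(n - k - 1) / factorial(n)
                phi += weight * (model(with_i) - model(without_i))
        values.append(phi)
    return values

# Toy linear model: for linear models the Shapley value of feature i
# reduces to w_i * (x_i - baseline_i), which makes the output easy to check.
model = lambda z: 2 * z[0] + 3 * z[1] + z[2]
phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# phi ≈ [2.0, 3.0, 1.0]; the values sum to model(x) - model(baseline)
```

A useful sanity check on any Shapley implementation is the efficiency property visible in the last comment: the attributions sum exactly to the gap between the prediction and the baseline prediction.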