Interpretable Deep Learning
Interpretable deep learning aims to make the decision-making processes of deep neural networks transparent and understandable, addressing the "black box" problem that hinders trust and adoption in high-stakes applications. Current research focuses on architectures that are interpretable by construction, such as concept bottleneck models, and on explanation techniques such as attention analysis, counterfactual explanations, and Shapley values that provide insight into model predictions. Interpretability is crucial for building reliable and trustworthy AI systems in domains ranging from healthcare and finance to neuroimaging and environmental monitoring, because it enables practitioners to understand, validate, and debug complex models.
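To make the architectural approach concrete, below is a minimal sketch of a concept bottleneck model in PyTorch. It is a toy example with assumed layer sizes and data, not a reference implementation from any particular paper: the network first predicts a vector of human-interpretable concepts, and the final classifier sees only those concepts, so every prediction can be inspected (and corrected) at the concept level.

```python
# Minimal concept bottleneck model (CBM) sketch; dimensions are illustrative.
import torch
import torch.nn as nn
import torch.nn.functional as F

class ConceptBottleneckModel(nn.Module):
    def __init__(self, in_dim: int, n_concepts: int, n_classes: int):
        super().__init__()
        # x -> concepts: each output unit is tied to one named concept
        self.concept_net = nn.Sequential(
            nn.Linear(in_dim, 64), nn.ReLU(), nn.Linear(64, n_concepts)
        )
        # concepts -> label: a simple linear head, so the mapping from
        # concepts to the final decision stays easy to inspect
        self.label_net = nn.Linear(n_concepts, n_classes)

    def forward(self, x: torch.Tensor):
        concept_logits = self.concept_net(x)
        # The label head sees only predicted concepts, never raw features.
        label_logits = self.label_net(torch.sigmoid(concept_logits))
        return concept_logits, label_logits

# Joint training: supervise both the concepts and the final label.
model = ConceptBottleneckModel(in_dim=16, n_concepts=4, n_classes=3)
x = torch.randn(8, 16)                   # toy input batch
c = torch.randint(0, 2, (8, 4)).float()  # binary concept annotations
y = torch.randint(0, 3, (8,))            # class labels
concept_logits, label_logits = model(x)
loss = (F.binary_cross_entropy_with_logits(concept_logits, c)
        + F.cross_entropy(label_logits, y))
loss.backward()
```

One practical benefit of this layout is test-time intervention: if a predicted concept is visibly wrong, a human can overwrite it with the correct value and rerun just the label head, which is cheap here because that head is a single linear layer over the concepts.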