Interpretable by Design
Interpretable-by-design (IbD) approaches focus on creating machine learning models and interfaces that are inherently transparent and understandable, addressing the "black box" problem of many complex AI systems. Current research emphasizes novel model architectures, such as those based on prototypes, additive models, and mixtures of experts, alongside user-centered interface designs that effectively communicate model behavior to non-experts. This approach is crucial for building trust in AI, particularly in high-stakes domains like healthcare and autonomous systems, and it facilitates better model understanding, debugging, and responsible deployment.
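To make the prototype-based idea concrete, the sketch below implements a minimal nearest-prototype classifier in NumPy: each class is summarized by a single prototype (here, simply the class mean), and every prediction can be explained by pointing to the prototype it matched. This is an illustrative simplification under assumed names (e.g., `NearestPrototypeClassifier`, `explain`), not the method of any particular paper in this area.

```python
import numpy as np

class NearestPrototypeClassifier:
    """Interpretable-by-design classifier: each class is represented by one
    prototype, and a prediction is explained by the prototype it is closest to.
    (Minimal sketch for illustration; real prototype models learn richer parts.)"""

    def fit(self, X, y):
        # One prototype per class: the mean of that class's training points.
        self.classes_ = np.unique(y)
        self.prototypes_ = np.stack([X[y == c].mean(axis=0) for c in self.classes_])
        return self

    def predict(self, X):
        # Distance from every sample to every prototype; pick the nearest.
        dists = np.linalg.norm(X[:, None, :] - self.prototypes_[None, :, :], axis=2)
        return self.classes_[dists.argmin(axis=1)]

    def explain(self, x):
        # The "explanation" is simply which prototype the sample resembles most.
        dists = np.linalg.norm(self.prototypes_ - x, axis=1)
        idx = dists.argmin()
        return {"predicted_class": self.classes_[idx],
                "matched_prototype": self.prototypes_[idx],
                "distance": float(dists[idx])}


# Tiny usage example on synthetic 2-D data.
rng = np.random.default_rng(0)
X = np.vstack([rng.normal(0, 1, (50, 2)), rng.normal(4, 1, (50, 2))])
y = np.array([0] * 50 + [1] * 50)
clf = NearestPrototypeClassifier().fit(X, y)
print(clf.explain(np.array([3.5, 4.2])))
```

The explanation here is faithful by construction: the model's decision rule and its explanation are the same object (distance to a prototype), which is the defining property IbD methods aim for, in contrast to post-hoc explanations of black-box models.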