Interpretable Machine Learning Models
Interpretable machine learning (IML) aims to build predictive models that are not only accurate but also transparent, so users can understand how decisions are reached. Current research focuses on models that retain high predictive performance while exposing feature importance, chiefly generalized additive models (including higher-order and structural variants), explainable boosting machines (EBMs), and rule-based systems. This transparency is essential for building trust and enabling responsible use of AI in high-stakes domains such as healthcare and engineering, where understanding model behavior underpins decision-making and accountability.
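To make the additive-model idea concrete, below is a minimal sketch of the cyclic round-robin boosting scheme that underlies EBMs and tree-based additive models: each feature gets its own shape function, grown one shallow tree at a time on the residuals, so the final prediction decomposes into per-feature contributions. The function names (`fit_additive_model`, `feature_contributions`) and hyperparameters are illustrative assumptions, not the API of any particular library.

```python
import numpy as np
from sklearn.tree import DecisionTreeRegressor


def fit_additive_model(X, y, n_rounds=100, lr=0.1):
    """Fit an additive model: prediction = intercept + sum_j f_j(x_j)."""
    n_samples, n_features = X.shape
    # One list of tiny trees per feature; their (scaled) sum is that
    # feature's shape function f_j.
    shape_funcs = [[] for _ in range(n_features)]
    intercept = y.mean()
    residual = y - intercept
    for _ in range(n_rounds):
        for j in range(n_features):  # round-robin over features
            tree = DecisionTreeRegressor(max_depth=2)
            tree.fit(X[:, [j]], residual)       # tree sees only feature j
            residual -= lr * tree.predict(X[:, [j]])
    # (re-run the loop above with trees appended; shown separately for clarity)
            shape_funcs[j].append(tree)
    return intercept, shape_funcs, lr


def feature_contributions(X, intercept, shape_funcs, lr):
    """Return predictions and the per-feature additive contributions."""
    contribs = np.zeros_like(X, dtype=float)
    for j, trees in enumerate(shape_funcs):
        for tree in trees:
            contribs[:, j] += lr * tree.predict(X[:, [j]])
    # Each column of `contribs` is one feature's effect, directly
    # inspectable or plottable as a shape function.
    return intercept + contribs.sum(axis=1), contribs


# Usage on synthetic data with a known additive structure:
rng = np.random.default_rng(0)
X = rng.uniform(-2, 2, size=(500, 3))
y = np.sin(X[:, 0]) + X[:, 1] ** 2 + 0.1 * rng.normal(size=500)
intercept, shapes, lr = fit_additive_model(X, y)
pred, contribs = feature_contributions(X, intercept, shapes, lr)
# contribs[:, 2] should be near zero: feature 2 is irrelevant by construction.
```

Because the model is a sum of one-dimensional functions, inspecting a feature's contribution column (or plotting it against the raw feature values) reveals exactly how that feature drives predictions, which is the interpretability property the techniques above share.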