Interpretable Model
Interpretable models are machine learning systems whose decision-making processes are transparent and understandable to humans, addressing the "black box" problem of many high-performing models. Current research focuses on inherently interpretable architectures such as generalized additive models (GAMs), decision trees, rule lists, and symbolic regression, as well as post-hoc explanation methods for existing models, such as SHAP and LIME. The emphasis on interpretability is driven by the need for trust, accountability, and the ability to extract insights from complex data in fields ranging from healthcare and finance to scientific discovery, where understanding model decisions is crucial for effective and responsible use. Developing more accurate and efficient methods for building and evaluating interpretable models remains a major focus of ongoing research.
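For concreteness, the sketch below illustrates the two approaches the paragraph contrasts: an inherently interpretable model (a shallow decision tree whose rules can be read directly) and a post-hoc importance measure applied to a fitted model. It is a minimal example assuming scikit-learn is available; the dataset, depth, and permutation importance (standing in for SHAP/LIME-style post-hoc explanation) are illustrative choices, not methods from the listed papers.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

# Illustrative dataset; any tabular classification task would do.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)

# Inherently interpretable: a shallow decision tree whose decision rules
# can be printed and inspected directly.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=list(X.columns)))

# Post-hoc explanation: permutation importance scores how much shuffling
# each feature degrades performance (SHAP and LIME play a similar role
# for arbitrary black-box models).
result = permutation_importance(tree, X, y, n_repeats=10, random_state=0)
top = sorted(zip(X.columns, result.importances_mean), key=lambda t: -t[1])[:5]
for name, score in top:
    print(f"{name}: {score:.3f}")
```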
Papers
Explainable Spatio-Temporal GCNNs for Irregular Multivariate Time Series: Architecture and Application to ICU Patient Data
Óscar Escudero-Arnanz, Cristina Soguero-Ruiz, Antonio G. Marques
Abstracted Shapes as Tokens -- A Generalizable and Interpretable Model for Time-series Classification
Yunshi Wen, Tengfei Ma, Tsui-Wei Weng, Lam M. Nguyen, Anak Agung Julius