Interpretable Model
Interpretable modeling aims to build machine learning systems whose decision-making processes are transparent and understandable to humans, addressing the "black box" problem of many high-performing models. Current research pursues two complementary directions: inherently interpretable architectures such as generalized additive models (GAMs), decision trees, rule lists, and symbolic regression, and post-hoc explanation methods for existing models, such as SHAP and LIME. The emphasis on interpretability is driven by the need for trust, accountability, and insight from complex data in fields ranging from healthcare and finance to scientific discovery, where understanding model decisions is crucial for effective and responsible use. Developing more accurate and efficient methods for building and evaluating interpretable models, illustrated by the sketch below, remains a major focus of ongoing work.
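The following minimal sketch (not drawn from the papers listed below) illustrates both directions under simple assumptions: a shallow scikit-learn decision tree serves as the inherently interpretable model, its learned rules are printed directly, and permutation importance stands in for richer post-hoc methods such as SHAP or LIME. The dataset, tree depth, and other parameters are arbitrary choices for demonstration.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.model_selection import train_test_split
from sklearn.tree import DecisionTreeClassifier, export_text
from sklearn.inspection import permutation_importance

# Load a small tabular dataset and hold out a test split.
X, y = load_breast_cancer(return_X_y=True, as_frame=True)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.3, random_state=0
)

# A depth-limited tree stays human-readable: every prediction is a short
# conjunction of threshold rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0)
tree.fit(X_train, y_train)

# Print the learned rules directly -- the model itself is the explanation.
print(export_text(tree, feature_names=list(X.columns)))

# A simple post-hoc view: how much does test accuracy drop when each feature
# is shuffled? (A stand-in here for methods such as SHAP or LIME.)
result = permutation_importance(tree, X_test, y_test, n_repeats=10, random_state=0)
for name, score in sorted(
    zip(X.columns, result.importances_mean), key=lambda t: -t[1]
)[:5]:
    print(f"{name}: {score:.3f}")
```

Printing the rule list and ranking features by importance are the kinds of diagnostics that inherently interpretable and post-hoc approaches, respectively, aim to provide.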
Papers
Interpretable Symbolic Regression for Data Science: Analysis of the 2022 Competition
F. O. de Franca, M. Virgolin, M. Kommenda, M. S. Majumder, M. Cranmer, G. Espada, L. Ingelse, A. Fonseca, M. Landajuela, B. Petersen, R. Glatt, N. Mundhenk, C. S. Lee, J. D. Hochhalter, D. L. Randall, P. Kamienny, H. Zhang, G. Dick, A. Simon, B. Burlacu, J. Kasak, M. Machado, C. Wilstrup, W. G. La Cava
An Interpretable Loan Credit Evaluation Method Based on Rule Representation Learner
Zihao Chen, Xiaomeng Wang, Yuanjiang Huang, Tao Jia