Interpretable Function
Interpretable function research aims to create machine learning models whose decision-making processes are transparent and understandable, contrasting with the "black box" nature of many neural networks. Current efforts focus on developing novel architectures, such as those based on additive models, Laurent polynomials, or symbolic regression, and employing techniques like genetic programming and kernel regression to discover interpretable functions that approximate complex models. This pursuit is crucial for building trust in AI systems, enabling better model debugging and validation, and facilitating the application of machine learning in high-stakes domains like healthcare where understanding model behavior is paramount.
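One common pattern mentioned above, approximating a complex model with an interpretable function, can be sketched minimally. The example below is a hypothetical illustration (the `black_box` function and degree choice are assumptions, not from any specific paper): it fits a low-degree polynomial surrogate whose coefficients form an explicit, human-readable formula.

```python
import numpy as np

# Hypothetical "black-box" model whose behavior we want to explain.
def black_box(x):
    return np.sin(x) + 0.5 * x

# Sample the model's responses over the input range of interest.
x = np.linspace(-1.0, 1.0, 200)
y = black_box(x)

# Fit a degree-3 polynomial surrogate: an explicit symbolic formula
# that approximates the black box on this range.
coeffs = np.polyfit(x, y, deg=3)
surrogate = np.poly1d(coeffs)

# The surrogate's printed coefficients ARE its explanation.
print(surrogate)

# Check the fidelity of the interpretable approximation.
max_err = np.max(np.abs(surrogate(x) - y))
print(f"max absolute error on [-1, 1]: {max_err:.4f}")
```

Richer approaches in the literature (symbolic regression, genetic programming, additive models) search larger function spaces, but the trade-off is the same: a transparent formula in exchange for some approximation error, which should always be measured as above.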