Shapley Value

Shapley values, originating in cooperative game theory, provide a principled framework for assigning importance scores to individual features, explaining how each feature contributes to a model's prediction. Current research focuses on improving the computational efficiency of Shapley value estimation, particularly through advanced sampling techniques and by exploiting model structure such as graphs or trees. It also addresses challenges such as handling correlated features and ensuring the stability and accuracy of the resulting estimates. This work has significant implications for explainable AI (XAI), enabling more transparent and trustworthy machine learning models across diverse applications, from personalized recommendations to forensic science.
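Concretely, for a feature set N and a value function v (e.g., the model's expected prediction when only a subset of features is known), the Shapley value of feature i is φ_i(v) = Σ_{S ⊆ N \ {i}} |S|!(|N| − |S| − 1)!/|N|! · (v(S ∪ {i}) − v(S)), an average of i's marginal contributions over all coalitions. Because the exact sum is exponential in |N|, sampling-based estimators are common. The sketch below illustrates the classic permutation-sampling (Monte Carlo) estimator; it is not taken from any specific paper above, and the `value_fn` interface and toy linear game are assumptions made for the example.

```python
import numpy as np

def shapley_monte_carlo(value_fn, n_features, n_permutations=1000, seed=0):
    """Estimate Shapley values by averaging marginal contributions
    over randomly sampled feature orderings.

    value_fn: hypothetical interface mapping a boolean mask of "present"
              features (length n_features) to a scalar payoff, e.g. the
              model's prediction with absent features marginalized.
    """
    rng = np.random.default_rng(seed)
    phi = np.zeros(n_features)
    for _ in range(n_permutations):
        order = rng.permutation(n_features)      # random feature ordering
        mask = np.zeros(n_features, dtype=bool)
        prev = value_fn(mask)                    # payoff of the empty coalition
        for i in order:
            mask[i] = True
            cur = value_fn(mask)
            phi[i] += cur - prev                 # marginal contribution of feature i
            prev = cur
    return phi / n_permutations

# Toy usage: an additive "game" whose exact Shapley values are the weights.
weights = np.array([1.0, -2.0, 0.5])
v = lambda mask: float(weights[mask].sum())
print(shapley_monte_carlo(v, n_features=3, n_permutations=2000))
```

For additive games like the toy example, each feature's Shapley value equals its own weight, which makes the estimator easy to sanity-check; real explainers differ mainly in how they define v (how absent features are handled) and in how they reduce the sampling cost.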

Papers