Shapley Value
Shapley values, originating in cooperative game theory, provide a principled framework for assigning importance scores to individual features, explaining how each feature contributes to a model's predictions. Current research focuses on improving the computational efficiency of Shapley value estimation, particularly through advanced sampling techniques and by exploiting model structure such as graphs or trees, as well as on handling correlated features and ensuring stable, accurate estimates. This work has significant implications for explainable AI (XAI), enabling more transparent and trustworthy machine learning models across diverse applications, from personalized recommendations to forensic science.
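To make the computational-efficiency point concrete, the sketch below contrasts the exact Shapley formula (exponential in the number of players) with a Monte Carlo permutation-sampling estimator, one of the simplest of the sampling techniques alluded to above. This is a minimal illustration over an abstract cooperative game, not any particular paper's method; the function names and the toy characteristic function are illustrative assumptions.

```python
import itertools
import math
import random

def exact_shapley(value, n):
    # Exact Shapley values for an n-player game, where value() maps a
    # frozenset of player indices to that coalition's worth. Cost grows
    # as O(2^n), which is why sampling estimators are needed in practice.
    phi = [0.0] * n
    players = range(n)
    for i in players:
        others = [p for p in players if p != i]
        for r in range(n):
            for subset in itertools.combinations(others, r):
                s = frozenset(subset)
                # Weight = |S|! (n - |S| - 1)! / n!
                w = (math.factorial(len(s)) * math.factorial(n - len(s) - 1)
                     / math.factorial(n))
                phi[i] += w * (value(s | {i}) - value(s))
    return phi

def sampled_shapley(value, n, num_permutations=2000, seed=0):
    # Monte Carlo estimate: average each player's marginal contribution
    # over random orderings in which players join the coalition.
    rng = random.Random(seed)
    phi = [0.0] * n
    players = list(range(n))
    for _ in range(num_permutations):
        rng.shuffle(players)
        coalition = set()
        for p in players:
            before = value(frozenset(coalition))
            coalition.add(p)
            phi[p] += value(frozenset(coalition)) - before
    return [v / num_permutations for v in phi]
```

For a symmetric toy game such as `value = lambda s: len(s) ** 2` with three players, symmetry and efficiency imply each player's exact Shapley value is `9 / 3 = 3`, and the sampled estimates converge to the same numbers while never enumerating all coalitions.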
Papers
Measuring the Effect of Training Data on Deep Learning Predictions via Randomized Experiments
Jinkun Lin, Anqi Zhang, Mathias Lecuyer, Jinyang Li, Aurojit Panda, Siddhartha Sen
Shapley-NAS: Discovering Operation Contribution for Neural Architecture Search
Han Xiao, Ziwei Wang, Zheng Zhu, Jie Zhou, Jiwen Lu