Black Box Model
Black box models, whose internal workings are opaque, resist inspection of their decision-making processes, which undermines trust and accountability. Current research improves interpretability through methods such as generalized additive models (GAMs) and surrogate models, probes vulnerabilities to adversarial manipulation and bias via explanation-driven attacks, and counters those vulnerabilities with robust defense mechanisms. This work is crucial for building trust in AI systems across applications from medical diagnosis to autonomous driving, enhancing transparency and mitigating the risks of unpredictable model behavior.
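To make the surrogate-model idea above concrete, the following minimal sketch in Python fits a shallow decision tree to mimic an opaque model's predictions. The synthetic dataset, the gradient-boosting "black box", and scikit-learn as the toolkit are illustrative assumptions, not taken from the papers below.

    # Minimal sketch of a global surrogate model: fit an interpretable
    # decision tree to reproduce the predictions of an opaque "black box".
    # Dataset and model choices here are illustrative assumptions.
    from sklearn.datasets import make_classification
    from sklearn.ensemble import GradientBoostingClassifier
    from sklearn.tree import DecisionTreeClassifier
    from sklearn.metrics import accuracy_score

    # Synthetic data standing in for a real task.
    X, y = make_classification(n_samples=2000, n_features=10, random_state=0)

    # The "black box": accurate but hard to interpret directly.
    black_box = GradientBoostingClassifier(random_state=0).fit(X, y)
    bb_preds = black_box.predict(X)

    # The surrogate: a shallow tree trained on the black box's
    # *predictions* (not the true labels), so its rules approximate
    # the black box's decision boundary rather than the task itself.
    surrogate = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, bb_preds)

    # Fidelity: how often the surrogate agrees with the black box.
    fidelity = accuracy_score(bb_preds, surrogate.predict(X))
    print(f"Surrogate fidelity to black box: {fidelity:.2%}")

Because the surrogate is trained against the black box's outputs rather than the ground truth, the fidelity score measures how faithfully the tree's human-readable rules approximate the opaque model, which is the property a surrogate explanation needs.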
Papers
MOUNTAINEER: Topology-Driven Visual Analytics for Comparing Local Explanations
Parikshit Solunke, Vitoria Guardieiro, Joao Rulff, Peter Xenopoulos, Gromit Yeuk-Yin Chan, Brian Barr, Luis Gustavo Nonato, Claudio Silva
DiffExplainer: Unveiling Black Box Models Via Counterfactual Generation
Yingying Fang, Shuang Wu, Zihao Jin, Caiwen Xu, Shiyi Wang, Simon Walsh, Guang Yang