Robust Counterfactuals
Robust counterfactual explanations aim to provide stable, reliable explanations for machine learning model predictions, ensuring that an explanation remains valid even when the model is perturbed or retrained. Current research focuses on algorithms and metrics for generating such robust explanations, particularly for neural networks, tree-based ensembles, and graph neural networks, often using techniques such as diversity-based selection and stability measures. This work is crucial for the trustworthiness of explainable AI (XAI) systems: it improves user understanding of model decisions and mitigates the risks of unreliable explanations in high-stakes applications.
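To make the core idea concrete, here is a minimal, self-contained sketch (not drawn from any specific paper listed below) using a toy linear classifier. A counterfactual is found by pushing the input across the decision boundary; robustness is then estimated as the fraction of randomly weight-perturbed models under which the counterfactual still flips the prediction. The `margin` parameter, a hypothetical knob introduced here for illustration, trades extra distance for stability.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy linear model: predict 1 if w @ x + b > 0 (stand-in for any classifier).
w = np.array([1.5, -2.0])
b = 0.5

def predict(x, w=w, b=b):
    return int(w @ x + b > 0)

def counterfactual(x, step=0.05, margin=0.0, max_iter=400):
    """Move x along the decision-boundary normal until the prediction flips,
    then take int(margin / step) extra steps to buy robustness."""
    target = 1 - predict(x)
    direction = w / np.linalg.norm(w)
    if target == 0:
        direction = -direction
    cf = x.astype(float).copy()
    for _ in range(max_iter):
        if predict(cf) == target:
            break
        cf = cf + step * direction
    # Extra steps past the boundary enlarge the margin of the counterfactual.
    for _ in range(int(margin / step)):
        cf = cf + step * direction
    return cf

def validity_under_perturbation(cf, x, sigma=0.1, n_models=500):
    """Fraction of weight-perturbed models for which cf still flips the label."""
    target = 1 - predict(x)
    hits = 0
    for _ in range(n_models):
        w_p = w + rng.normal(0, sigma, size=w.shape)
        b_p = b + rng.normal(0, sigma)
        hits += int(predict(cf, w_p, b_p) == target)
    return hits / n_models

x = np.array([0.0, 1.0])                 # classified 0 by the original model
cf_plain = counterfactual(x)             # barely crosses the boundary
cf_robust = counterfactual(x, margin=1.0)  # crosses with a safety margin

print(validity_under_perturbation(cf_plain, x))
print(validity_under_perturbation(cf_robust, x))
```

The plain counterfactual sits just past the boundary and is frequently invalidated by small weight perturbations, while the margin-padded one survives far more perturbed models; this distance-vs-stability trade-off is the central tension the papers below study.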