Robust Explanation
Robust explanation in machine learning aims to create explanations for model predictions that are reliable and consistent, even when faced with adversarial attacks or changes in input data. Current research focuses on improving the robustness of various explanation methods, including counterfactual explanations, saliency maps, and prototype-based approaches, often applied to deep neural networks and ensemble methods like random forests. This work is crucial for building trust in AI systems, particularly in high-stakes applications where understanding and verifying model decisions is paramount, and for mitigating the risks associated with unreliable or easily manipulated explanations.
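To make the notion of robustness concrete, the sketch below estimates how stable a gradient-based saliency map is under small random input perturbations, scoring the overlap between the top-k most salient features before and after perturbation. This is a minimal illustration assuming a PyTorch classifier; the function names (`saliency`, `saliency_stability`), the top-k overlap metric, and all parameter defaults are illustrative choices, not taken from any specific paper on this page.

```python
import torch

def saliency(model, x):
    # Gradient saliency: |d(top logit)/dx| for each input feature.
    x = x.clone().detach().requires_grad_(True)
    model(x).max().backward()
    return x.grad.abs()

def saliency_stability(model, x, eps=0.01, k=10, n_trials=20):
    # Average top-k overlap between the saliency map of x and the
    # saliency maps of randomly perturbed copies of x.
    # Returns a score in [0, 1]; 1.0 means the top-k features never change.
    base_topk = set(saliency(model, x).flatten().topk(k).indices.tolist())
    overlaps = []
    for _ in range(n_trials):
        x_pert = x + eps * torch.randn_like(x)  # small random perturbation
        pert_topk = set(saliency(model, x_pert).flatten().topk(k).indices.tolist())
        overlaps.append(len(base_topk & pert_topk) / k)
    return sum(overlaps) / n_trials
```

A low stability score under this kind of probe is exactly the failure mode the research surveyed here targets: an explanation that changes drastically for inputs the model treats nearly identically cannot be trusted, and may even be deliberately manipulable.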