Instance-Based Explanation
Instance-based explanation methods aim to clarify the individual predictions of machine learning models, enhancing transparency and trust. Current research develops and refines these methods across a range of model architectures, including deep generative models, diffusion models, and gradient boosting machines; explores both spatial and frequency domains for image analysis; and employs techniques such as permutation importance and prototype-based approaches. This work is crucial for building more reliable and accountable AI systems, particularly in high-stakes domains like medicine and finance, where understanding model decisions is paramount.
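Of the techniques mentioned above, permutation importance is the simplest to illustrate: shuffle one feature at a time and measure how much the model's error grows. The sketch below is a minimal, self-contained illustration using a plain least-squares linear model on synthetic data; all data, coefficients, and variable names are illustrative assumptions, not from any specific paper.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: y depends strongly on feature 0, weakly on feature 1,
# and not at all on feature 2 (purely illustrative).
X = rng.normal(size=(500, 3))
y = 3.0 * X[:, 0] + 0.5 * X[:, 1] + rng.normal(scale=0.1, size=500)

# Fit an ordinary least-squares linear model.
w, *_ = np.linalg.lstsq(X, y, rcond=None)

def mse(X, y, w):
    """Mean squared error of the linear model's predictions."""
    return float(np.mean((X @ w - y) ** 2))

baseline = mse(X, y, w)

# Permutation importance: shuffle each feature column in turn and
# record the resulting increase in error over the baseline.
importances = []
for j in range(X.shape[1]):
    Xp = X.copy()
    Xp[:, j] = rng.permutation(Xp[:, j])
    importances.append(mse(Xp, y, w) - baseline)

print(importances)  # feature 0 should dominate, feature 2 should be near zero
```

Note that this yields a global feature ranking; instance-based variants instead attribute a single prediction, but the shuffle-and-remeasure idea is the same building block.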
Papers
17 papers, dated July 26, 2022 through November 7, 2024 (titles and links not preserved in this extract).