Instance-Based Explanation

Instance-based explanation methods aim to clarify individual predictions made by machine learning models, enhancing transparency and trust. Current research develops and refines these methods across a range of model architectures, including deep generative models, diffusion models, and gradient boosting machines; it explores both spatial and frequency domains for image analysis and employs techniques such as permutation importance and prototype-based approaches (sketched below). This work is crucial for building reliable and accountable AI systems, particularly in high-stakes domains like medicine and finance, where understanding model decisions is paramount.
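
To make two of the named techniques concrete, the sketch below fits a gradient boosting classifier and then (a) explains a single test prediction prototype-style, by retrieving the most similar training instances, and (b) computes permutation importance over the held-out set. The dataset, model choice, and neighbor count are illustrative assumptions for this summary, not taken from any particular paper.

```python
from sklearn.datasets import load_iris
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split
from sklearn.neighbors import NearestNeighbors

# Illustrative setup: any tabular dataset and classifier would do.
X, y = load_iris(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# --- Prototype-based explanation of one prediction -------------------
# Explain a single prediction by showing the training instances closest
# to the query: "the model predicts class c because these similar
# training examples carry label c".
nn = NearestNeighbors(n_neighbors=3).fit(X_train)
query = X_test[:1]                       # the one prediction to explain
pred = model.predict(query)[0]
dists, idxs = nn.kneighbors(query)       # nearest training instances

print(f"prediction for query: class {pred}")
for d, i in zip(dists[0], idxs[0]):
    print(f"  prototype train[{i}] (label {y_train[i]}, distance {d:.3f})")

# --- Permutation importance -------------------------------------------
# Shuffle each feature in turn and measure the drop in held-out score;
# features whose permutation hurts accuracy most matter most.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=10, random_state=0)
for f in result.importances_mean.argsort()[::-1]:
    print(f"feature {f}: {result.importances_mean[f]:.3f} "
          f"+/- {result.importances_std[f]:.3f}")
```

Note the scope difference: the prototype retrieval explains one instance at a time, while permutation importance summarizes feature relevance across the whole evaluation set; the surveyed work combines both kinds of view.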

Papers