Explainable Artificial Intelligence Methods

Explainable Artificial Intelligence (XAI) methods aim to make the decision-making processes of complex machine learning models, such as deep neural networks, more transparent and understandable. Current research focuses on developing and evaluating XAI techniques based on feature attribution (e.g., SHAP, LIME), concept activation vectors, and visualization methods such as Grad-CAM, across application areas including medical imaging, time series analysis, and remote sensing. XAI matters because it builds trust and facilitates the adoption of AI in high-stakes domains, such as healthcare, finance, and autonomous systems, where understanding model predictions is crucial for accountability and reliable decision-making.
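
To make the feature-attribution idea concrete, the sketch below shows how SHAP values can be computed for a tree-based classifier and aggregated into a global feature ranking. It is a minimal illustration only, assuming a scikit-learn GradientBoostingClassifier trained on the built-in breast cancer dataset; the dataset, model, and number of explained samples are illustrative choices, not taken from the source.

```python
import numpy as np
import shap
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import GradientBoostingClassifier

# Illustrative tabular dataset and model (assumptions, not from the source).
data = load_breast_cancer()
X, y = data.data, data.target
model = GradientBoostingClassifier().fit(X, y)

# TreeExplainer computes SHAP values efficiently for tree ensembles;
# each value is one feature's contribution to one prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])

# Aggregate per-sample attributions into a global ranking by taking the
# mean absolute SHAP value of each feature over the explained samples.
mean_abs = np.abs(shap_values).mean(axis=0)
for idx in np.argsort(mean_abs)[::-1][:5]:
    print(f"{data.feature_names[idx]}: {mean_abs[idx]:.4f}")
```

The same per-prediction attributions can also be inspected individually (e.g., with SHAP's plotting utilities) to explain a single decision rather than the model's overall behavior, which is the typical use case in the high-stakes applications mentioned above.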

Papers