Explainable AI Models

Explainable AI (XAI) focuses on developing AI models whose decision-making processes are transparent and understandable, addressing the "black box" problem of many machine learning algorithms. Current research emphasizes techniques such as SHAP values, interpretability-oriented regularization methods (e.g., SHIELD), and the integration of knowledge graphs to enhance model interpretability and improve performance across diverse applications, including healthcare, geotechnical engineering, and social media analysis. This work is crucial for building trust in AI systems, facilitating responsible development, and enabling informed decision-making in high-stakes domains where understanding the reasoning behind a prediction is paramount.
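
As a concrete illustration of the first of these techniques, the sketch below computes SHAP values with the open-source `shap` library: each prediction is decomposed into additive per-feature contributions, which is what makes the explanation faithful to the model. The model and dataset (a random-forest regressor on scikit-learn's diabetes data) are illustrative assumptions, not taken from any particular paper surveyed here.

```python
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Train an opaque ("black box") tabular model; the model/dataset choice
# here is an illustrative assumption, not a method from a specific paper.
X, y = load_diabetes(return_X_y=True)
model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y)

# TreeExplainer computes exact SHAP values for tree ensembles:
# one additive contribution per feature, per prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # shape: (100, n_features)

# Local additivity: base value + contributions reconstructs the prediction.
reconstructed = explainer.expected_value + shap_values[0].sum()
print("prediction:", model.predict(X[:1])[0], "reconstructed:", reconstructed)

# Global view: rank features by mean absolute contribution.
importance = np.abs(shap_values).mean(axis=0)
print("top features by mean |SHAP|:", np.argsort(importance)[::-1][:5])
```

The additivity check is the key property: because the per-feature contributions sum exactly to the model's output, the explanation cannot drift from what the model actually computed.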

Papers