Explainable AI Models
Explainable AI (XAI) focuses on developing models whose decision-making processes are transparent and understandable, addressing the "black box" problem posed by many machine learning algorithms. Current research emphasizes techniques such as SHAP values, regularization methods (e.g., SHIELD), and the integration of knowledge graphs to improve model interpretability and predictive performance across diverse applications, including healthcare, geotechnical engineering, and social media analysis. This work is crucial for building trust in AI systems, supporting responsible development, and enabling informed decision-making in high-stakes domains where understanding the reasoning behind a prediction is paramount.
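As a concrete illustration of the SHAP technique mentioned above, the minimal sketch below computes per-feature attributions for a tree-based regressor. The choice of the open-source `shap` and `scikit-learn` packages, the diabetes dataset, and the gradient-boosting model are assumptions made for the example only and are not drawn from any particular paper on this topic.

```python
# Minimal sketch: explaining a tree-based regressor with SHAP values.
# Assumes the `shap` and `scikit-learn` packages are installed; the dataset
# and model below are illustrative choices, not tied to a specific paper.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import GradientBoostingRegressor

# Train an otherwise opaque ensemble model.
data = load_diabetes()
X, y = data.data, data.target
model = GradientBoostingRegressor(random_state=0).fit(X, y)

# TreeExplainer attributes each individual prediction to the input features,
# which is what makes the model's reasoning inspectable case by case.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(X[:100])  # shape: (100 samples, n_features)

# Rank features by mean absolute contribution across the explained samples.
importance = np.abs(shap_values).mean(axis=0)
for name, score in sorted(zip(data.feature_names, importance), key=lambda t: -t[1]):
    print(f"{name}: {score:.3f}")
```

The printed ranking gives a global view of which inputs drive the model, while the per-sample `shap_values` rows support local, case-by-case explanations of individual predictions.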