XAI Community
The XAI (explainable AI) community develops and applies methods that make the decision-making processes of artificial intelligence models more transparent and understandable. Current research emphasizes improving the interpretability of model architectures such as deep neural networks through techniques like SHAP, LIME, and Grad-CAM, and explores using large language models to translate technical explanations into user-friendly formats. This work is crucial for building trust in AI systems across diverse fields, from healthcare diagnostics and financial forecasting to engineering, and for ensuring responsible AI development and deployment.
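As a minimal illustration of the attribution techniques named above, the sketch below computes SHAP values for a tree ensemble and ranks features by their average contribution. The dataset, model, and ranking logic are illustrative assumptions for this sketch, not drawn from the papers listed on this page.

```python
# A minimal sketch of post-hoc feature attribution with SHAP,
# assuming scikit-learn and the shap package are installed.
# The dataset and model here are illustrative choices only.
import numpy as np
import shap
from sklearn.datasets import load_diabetes
from sklearn.ensemble import RandomForestRegressor

# Fit a small regressor on a bundled tabular dataset.
data = load_diabetes()
model = RandomForestRegressor(n_estimators=100, random_state=0)
model.fit(data.data, data.target)

# TreeExplainer computes SHAP values efficiently for tree ensembles;
# each value is one feature's additive contribution to a prediction.
explainer = shap.TreeExplainer(model)
shap_values = explainer.shap_values(data.data[:100])

# Global importance: mean absolute SHAP value per feature.
mean_abs = np.abs(shap_values).mean(axis=0)
for i in mean_abs.argsort()[::-1]:
    print(f"{data.feature_names[i]}: {mean_abs[i]:.3f}")
```

The other techniques mentioned follow the same post-hoc pattern: LIME fits a simple local surrogate model around a single prediction, while Grad-CAM uses the gradients flowing into a convolutional layer to localize the image regions that drive a classification.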
Papers
XAIport: A Service Framework for the Early Adoption of XAI in AI Model Development
Zerui Wang, Yan Liu, Abishek Arumugam Thiruselvi, Abdelwahab Hamou-Lhadj
Revealing Vulnerabilities of Neural Networks in Parameter Learning and Defense Against Explanation-Aware Backdoors
Md Abdul Kadir, GowthamKrishna Addluri, Daniel Sonntag