Model Explanation
Model explanation, or explainable AI (XAI), aims to make the decision-making processes of complex machine learning models transparent and understandable. Current research focuses on developing and evaluating various explanation methods, including those based on feature importance (e.g., SHAP, LIME), prototypes, and neural pathways, often applied to deep learning models (e.g., CNNs, Vision Transformers) and large language models (LLMs). This field is crucial for building trust in AI systems, improving model development and debugging, and mitigating potential privacy risks associated with model transparency.
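To make the feature-importance idea concrete, here is a minimal sketch of exact Shapley value attribution, the quantity that SHAP approximates, computed by brute-force subset enumeration for a tiny hypothetical model (the model, instance, and baseline below are illustrative assumptions, not from any specific paper):

```python
from itertools import combinations
from math import factorial

def model(x):
    # Hypothetical model: two linear terms plus one interaction term.
    return 2.0 * x[0] + 1.0 * x[1] + 0.5 * x[0] * x[2]

def shapley_values(f, x, baseline):
    """Exact Shapley values by enumerating all feature coalitions.

    A coalition S "plays" by taking the instance's values for features
    in S and the baseline's values elsewhere. Feasible only for small
    n, since the loop is exponential in the number of features.
    """
    n = len(x)

    def v(S):
        z = list(baseline)
        for i in S:
            z[i] = x[i]
        return f(z)

    phis = []
    for i in range(n):
        others = [j for j in range(n) if j != i]
        phi = 0.0
        for k in range(n):
            for S in combinations(others, k):
                # Standard Shapley weight: |S|! (n - |S| - 1)! / n!
                w = factorial(len(S)) * factorial(n - len(S) - 1) / factorial(n)
                phi += w * (v(set(S) | {i}) - v(S))
        phis.append(phi)
    return phis

phi = shapley_values(model, x=[1.0, 1.0, 1.0], baseline=[0.0, 0.0, 0.0])
# phi = [2.25, 1.0, 0.25]: the 0.5 interaction credit is split
# evenly between features 0 and 2, and the attributions sum to
# f(x) - f(baseline) = 3.5 (the efficiency property).
```

Practical libraries such as SHAP avoid this exponential enumeration with sampling and model-specific approximations, but the additivity (efficiency) property illustrated here is what makes these attributions useful for debugging and trust.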