Explainability Methods
Explainability methods aim to make the decision-making processes of complex machine learning models, particularly deep neural networks and large language models, more transparent and understandable. Current research focuses on developing and evaluating methods that assess the faithfulness and plausibility of explanations, often using techniques like counterfactual generation, attribution methods (e.g., SHAP, LIME, Grad-CAM), and concept-based approaches. This work is crucial for building trust in AI systems across diverse applications, from medical diagnosis to autonomous vehicles, by providing insights into model behavior and identifying potential biases.
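To make the idea of an attribution method concrete, below is a minimal sketch of gradient-based saliency (input-gradient attribution) in PyTorch. The toy model, input size, and `saliency` helper are illustrative assumptions, not taken from any paper referenced here; libraries such as SHAP, LIME, or Captum provide more sophisticated variants of the same idea.

```python
# A minimal sketch of gradient-based attribution (saliency) in PyTorch.
# The model architecture and shapes are hypothetical placeholders.
import torch
import torch.nn as nn

# Hypothetical classifier standing in for a "complex model".
model = nn.Sequential(
    nn.Linear(16, 32),
    nn.ReLU(),
    nn.Linear(32, 3),
)
model.eval()

def saliency(model: nn.Module, x: torch.Tensor, target_class: int) -> torch.Tensor:
    """Attribute the target-class score to input features via input gradients."""
    x = x.clone().requires_grad_(True)
    score = model(x)[0, target_class]   # scalar logit for the class of interest
    score.backward()                    # d(score)/d(input features)
    return x.grad.abs().squeeze(0)      # gradient magnitude as feature importance

x = torch.randn(1, 16)                  # one example with 16 features
attributions = saliency(model, x, target_class=2)
print(attributions)
```

The returned vector ranks input features by how strongly a small change in each would move the chosen class score, which is the basic quantity that methods like Grad-CAM and SHAP refine with smoothing, baselines, or game-theoretic averaging.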