Explainability Methods
Explainability methods aim to make the decision-making processes of complex machine learning models, particularly deep neural networks and large language models, more transparent and understandable. Current research focuses on developing and evaluating methods that assess the faithfulness and plausibility of explanations, often using techniques like counterfactual generation, attribution methods (e.g., SHAP, LIME, Grad-CAM), and concept-based approaches. This work is crucial for building trust in AI systems across diverse applications, from medical diagnosis to autonomous vehicles, by providing insights into model behavior and identifying potential biases.
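To make the idea of an attribution method concrete, the sketch below computes a simple gradient-based saliency (input times gradient) for a toy PyTorch classifier. It is a minimal illustration, not any specific method from the papers here; the model, input, and feature count are hypothetical placeholders.

    # Minimal sketch of a gradient-based attribution (saliency) method.
    # The model and input below are illustrative placeholders.
    import torch
    import torch.nn as nn

    # Hypothetical model: a tiny feed-forward classifier over 10 input features.
    model = nn.Sequential(nn.Linear(10, 32), nn.ReLU(), nn.Linear(32, 3))
    model.eval()

    x = torch.randn(1, 10, requires_grad=True)  # a single input example

    # Forward pass; attribute the score of the predicted class.
    logits = model(x)
    target_class = logits.argmax(dim=1).item()
    score = logits[0, target_class]

    # Backward pass: the gradient of the class score with respect to the input
    # gives a per-feature sensitivity; input * gradient is a common variant.
    score.backward()
    saliency = (x.grad * x).detach().abs().squeeze(0)

    # Rank features by attribution magnitude.
    ranking = saliency.argsort(descending=True)
    print("Feature importance ranking:", ranking.tolist())

Libraries such as SHAP, LIME, and Grad-CAM refine this basic idea with game-theoretic weighting, local surrogate models, or activation-map weighting, respectively, but all produce a per-feature (or per-region) relevance score of this kind.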