Explainability Problem
The explainability problem in machine learning concerns understanding how and why complex models, particularly deep learning models such as transformers and autoencoders, arrive at their predictions. Current research emphasizes methods for generating interpretable explanations, often using techniques such as Layer-wise Relevance Propagation (LRP), attention maps, and concept-based attribution, while also studying how computationally hard explanation generation is through parameterized complexity analysis. Addressing this problem is crucial for building trust in AI systems, improving model debugging and refinement, and ensuring responsible deployment across applications ranging from medicine to natural language processing.
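Of the techniques named above, Layer-wise Relevance Propagation is the most mechanical to illustrate. The sketch below is a minimal NumPy implementation of the LRP epsilon rule for a single dense layer, applied to a toy two-layer ReLU network; the function name lrp_epsilon and the toy weights are illustrative assumptions, not code from any particular paper covered here.

```python
import numpy as np

def lrp_epsilon(a, W, b, R_out, eps=1e-6):
    """Propagate relevance through one dense layer with the epsilon rule:
    R_j = sum_k a_j * W_jk / (z_k + eps * sign(z_k)) * R_k."""
    z = a @ W + b                                  # pre-activations of this layer
    z = z + eps * np.where(z >= 0, 1.0, -1.0)      # stabilizer keeps denominators nonzero
    s = R_out / z                                  # per-output "sensitivity"
    c = s @ W.T                                    # redistribute toward the inputs
    return a * c                                   # relevance assigned to each input unit

# Toy two-layer ReLU network (weights are random placeholders).
rng = np.random.default_rng(0)
W1, b1 = rng.normal(size=(4, 3)), np.zeros(3)
W2, b2 = rng.normal(size=(3, 2)), np.zeros(2)

x = rng.normal(size=4)
a1 = np.maximum(0.0, x @ W1 + b1)                  # hidden ReLU activations
logits = a1 @ W2 + b2

# Seed relevance at the predicted class, then propagate it back to the input.
R_out = np.zeros_like(logits)
R_out[np.argmax(logits)] = logits.max()
R_hidden = lrp_epsilon(a1, W2, b2, R_out)
R_input = lrp_epsilon(x, W1, b1, R_hidden)
print(R_input)                                     # per-feature relevance scores
```

The epsilon stabilizer keeps the denominator away from zero, and the returned vector distributes the output score over the input features, which is the core idea behind LRP-style relevance maps.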
Papers
Sixteen papers on this topic, published between September 3, 2022 and October 9, 2024.