High Explainability
High explainability in artificial intelligence (AI) aims to make the decision-making processes of complex models, such as large language models and deep neural networks, more transparent and understandable. Current research focuses on developing both intrinsic (built-in) and post-hoc (added after training) explainability methods, often employing techniques like attention mechanisms, feature attribution, and counterfactual examples to interpret model outputs across various modalities (text, images, audio). This pursuit is crucial for building trust in AI systems, particularly in high-stakes domains like medicine and finance, and for ensuring fairness, accountability, and responsible AI development.
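One of the post-hoc techniques mentioned above, feature attribution, can be sketched in a few lines. The example below is a minimal illustration, not any particular paper's method: it applies gradient-times-input attribution to a toy logistic-regression model whose weights are chosen arbitrarily for demonstration. Each feature's attribution score indicates how strongly, and in which direction, that feature pushed this particular prediction.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# Toy "trained" logistic-regression model (weights are illustrative only).
w = np.array([2.0, -1.0, 0.5])
b = 0.1

def predict(x):
    """Model output: probability of the positive class."""
    return sigmoid(x @ w + b)

def gradient_x_input(x):
    """Post-hoc attribution: gradient of the output with respect to each
    input feature, scaled by the feature's value (gradient * input)."""
    p = predict(x)
    grad = p * (1.0 - p) * w   # d/dx sigmoid(w.x + b) = p(1-p) * w
    return grad * x

x = np.array([1.0, 2.0, -1.0])
attributions = gradient_x_input(x)
# Features with larger |attribution| contributed more to this prediction;
# the sign shows whether the feature pushed the output up or down.
print(attributions)
```

For a linear model these scores align with the weights, but the same recipe applies to any differentiable model, including deep networks, which is why gradient-based attribution is a common post-hoc explainability baseline.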