High Explainability
Research on explainability in artificial intelligence (AI) aims to make the decision-making processes of complex models, such as large language models and deep neural networks, more transparent and understandable. Current work develops both intrinsic (built-in) and post-hoc (applied after training) explainability methods, often employing techniques like attention mechanisms, feature attribution, and counterfactual examples to interpret model outputs across modalities such as text, images, and audio. This pursuit is crucial for building trust in AI systems, particularly in high-stakes domains like medicine and finance, and for ensuring fairness, accountability, and responsible AI development.
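To make the idea of post-hoc feature attribution concrete, here is a minimal sketch of permutation importance: shuffle one input feature at a time and measure how much the model's accuracy drops. The toy model, synthetic data, and all function names below are illustrative assumptions, not taken from any specific paper or library.

```python
import random

random.seed(0)

def model(x):
    # Toy scorer: only the first two features actually matter;
    # the third has zero weight by construction.
    return 1 if 2.0 * x[0] - 1.5 * x[1] + 0.0 * x[2] > 0 else 0

# Synthetic dataset: labels come from the same rule the model uses,
# so baseline accuracy is 1.0 by construction.
X = [[random.gauss(0, 1) for _ in range(3)] for _ in range(500)]
y = [model(x) for x in X]

def accuracy(X, y):
    return sum(model(x) == t for x, t in zip(X, y)) / len(y)

baseline = accuracy(X, y)

def permutation_importance(X, y, feature):
    # Shuffle one feature column and measure the accuracy drop:
    # the larger the drop, the more the model relies on that feature.
    col = [x[feature] for x in X]
    random.shuffle(col)
    X_perm = [x[:feature] + [v] + x[feature + 1:] for x, v in zip(X, col)]
    return baseline - accuracy(X_perm, y)

for f in range(3):
    print(f"feature {f}: importance {permutation_importance(X, y, f):.3f}")
```

Because the third feature never influences the model, shuffling it leaves every prediction unchanged and its importance is exactly zero, while the two used features show a clear accuracy drop; this is the basic signal that attribution methods try to surface.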