High Explainability
High explainability in artificial intelligence (AI) aims to make the decision-making processes of complex models, such as large language models and deep neural networks, more transparent and understandable. Current research focuses on developing both intrinsic (built-in) and post-hoc (added after training) explainability methods, often employing techniques like attention mechanisms, feature attribution, and counterfactual examples to interpret model outputs across various modalities (text, images, audio). This pursuit is crucial for building trust in AI systems, particularly in high-stakes domains like medicine and finance, and for ensuring fairness, accountability, and responsible AI development.
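As an illustration of the post-hoc, model-agnostic feature-attribution methods mentioned above, the sketch below computes permutation importance: it shuffles one feature at a time and measures how much a trained model's accuracy drops. The dataset, model, and random seed are illustrative assumptions for the sketch, not taken from any of the listed papers.

```python
# A minimal sketch of post-hoc, model-agnostic feature attribution via
# permutation importance. Dataset and model are illustrative assumptions.
import numpy as np
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

model = RandomForestClassifier(random_state=0).fit(X_train, y_train)
baseline = accuracy_score(y_test, model.predict(X_test))

rng = np.random.default_rng(0)
importances = []
for j in range(X_test.shape[1]):
    X_perm = X_test.copy()
    rng.shuffle(X_perm[:, j])   # break the feature-label association
    drop = baseline - accuracy_score(y_test, model.predict(X_perm))
    importances.append(drop)    # larger drop => feature mattered more

# Report the five most influential features
for j in np.argsort(importances)[::-1][:5]:
    print(f"feature {j}: accuracy drop {importances[j]:.4f}")
```

Permutation importance is among the simplest post-hoc attributions; Shapley-value methods such as SHAP (the basis of TsSHAP below) refine the same perturbation idea with a game-theoretic weighting of feature coalitions.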
Papers
SketchXAI: A First Look at Explainability for Human Sketches
Zhiyu Qu, Yulia Gryaditskaya, Ke Li, Kaiyue Pang, Tao Xiang, Yi-Zhe Song
Evaluating ChatGPT's Information Extraction Capabilities: An Assessment of Performance, Explainability, Calibration, and Faithfulness
Bo Li, Gexiang Fang, Yang Yang, Quansen Wang, Wei Ye, Wen Zhao, Shikun Zhang
Feature Reduction Method Comparison Towards Explainability and Efficiency in Cybersecurity Intrusion Detection Systems
Adam M. Lehavi, Seongtae Kim
TsSHAP: Robust model agnostic feature-based explainability for time series forecasting
Vikas C. Raykar, Arindam Jati, Sumanta Mukherjee, Nupur Aggarwal, Kanthi Sarpatwar, Giridhar Ganapavarapu, Roman Vaculin