Hard-to-Easy Inconsistency
Hard-to-easy inconsistency in artificial intelligence refers to the phenomenon where sophisticated models that can solve complex tasks nonetheless fail on simpler, related ones. Current research focuses on identifying and mitigating this inconsistency across a range of AI applications, including large language models (LLMs), radiology report generation, and building energy rating assessment, often using techniques such as contrastive learning, explanation consistency evaluation, and multi-model ranking fusion. Addressing this inconsistency is crucial for the reliability and trustworthiness of AI systems across diverse domains, and ultimately for building more robust and dependable applications.
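As a concrete illustration of how this inconsistency might be quantified, the sketch below computes a hard-to-easy inconsistency rate over paired evaluation items: the fraction of pairs in which a model answers the harder question correctly but misses the easier, related one. This is a minimal sketch under assumed conventions; the `query_model` stub, the exact-match scoring, and the example pair are hypothetical placeholders and are not drawn from any of the papers listed below.

```python
from typing import Callable, List, Tuple

# Each item pairs a hard question with an easier, logically related one:
# (hard_question, hard_answer, easy_question, easy_answer)
Pair = Tuple[str, str, str, str]

def hard_to_easy_inconsistency_rate(
    pairs: List[Pair],
    query_model: Callable[[str], str],
) -> float:
    """Fraction of pairs where the model solves the hard question but
    fails the easier related one. Uses exact-match scoring for
    simplicity; real evaluations typically use richer answer matching.
    """
    inconsistent = 0
    hard_correct = 0
    for hard_q, hard_a, easy_q, easy_a in pairs:
        if query_model(hard_q).strip() == hard_a:
            hard_correct += 1
            if query_model(easy_q).strip() != easy_a:
                inconsistent += 1
    # Conditioning on hard-question success isolates the failures that
    # are surprising given the model's demonstrated capability.
    return inconsistent / hard_correct if hard_correct else 0.0

# Hypothetical usage with a toy stand-in for a real model call:
def query_model(question: str) -> str:
    canned = {
        "What is 17 * 24?": "408",  # hard item answered correctly
        "What is 17 * 2?": "43",    # easier item answered wrongly
    }
    return canned.get(question, "")

pairs = [("What is 17 * 24?", "408", "What is 17 * 2?", "34")]
print(hard_to_easy_inconsistency_rate(pairs, query_model))  # prints 1.0
```

Conditioning the rate on hard-question success, rather than reporting raw easy-question accuracy, keeps the metric focused on the phenomenon itself: failures that are unexpected given what the model has already shown it can do.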
Papers
Trade-off Between Efficiency and Consistency for Removal-based Explanations
Yifan Zhang, Haowei He, Zhiquan Tan, Yang Yuan
SoK: Modeling Explainability in Security Analytics for Interpretability, Trustworthiness, and Usability
Dipkamal Bhusal, Rosalyn Shin, Ajay Ashok Shewale, Monish Kumar Manikya Veerabhadran, Michael Clifford, Sara Rampazzi, Nidhi Rastogi