Hard-to-Easy Inconsistency

Hard-to-easy inconsistency in artificial intelligence is the phenomenon where a model capable of solving a complex task nevertheless fails on a simpler, related one, for example, correctly answering a multi-step reasoning question while missing one of its constituent sub-questions. Current research focuses on identifying and mitigating this inconsistency across applications such as large language models (LLMs), radiology report generation, and building energy rating assessment, often employing techniques like contrastive learning, explanation consistency evaluation, and multi-model ranking fusion. Addressing the inconsistency is crucial for the reliability and trustworthiness of AI systems, and hence for deploying them dependably across diverse domains.
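
As a concrete illustration, the sketch below measures hard-to-easy inconsistency as the fraction of paired items where a model solves the harder question but misses the easier counterpart. It is a minimal example, not a method from any specific paper: the `model_answer` callable, the `(hard, easy)` pairing scheme, and exact-match grading are all simplifying assumptions.

```python
from typing import Callable, Sequence

def hard_to_easy_inconsistency(
    pairs: Sequence[tuple[str, str, str, str]],
    model_answer: Callable[[str], str],  # assumed black-box model interface
) -> float:
    """Rate at which the model solves a hard item yet fails its easier pair.

    Each pair is (hard_question, hard_gold, easy_question, easy_gold).
    Grading is exact string match, a deliberate simplification.
    """
    hard_correct = 0
    inconsistent = 0
    for hard_q, hard_gold, easy_q, easy_gold in pairs:
        if model_answer(hard_q).strip() == hard_gold:
            hard_correct += 1
            # The model handled the harder task; check the easier one.
            if model_answer(easy_q).strip() != easy_gold:
                inconsistent += 1
    # Conditional rate: among solved hard items, how often the easy one fails.
    return inconsistent / hard_correct if hard_correct else 0.0
```

Run over a benchmark of paired items, a nonzero score flags exactly the failure mode described above; reporting the rate conditionally on hard-item success keeps it from being confounded with overall accuracy.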

Papers