Hard-to-Easy Inconsistency
Hard-to-easy inconsistency in artificial intelligence is the phenomenon where sophisticated models that solve complex tasks nonetheless fail on simpler, related ones. Current research focuses on identifying and mitigating this inconsistency across AI applications such as large language models (LLMs), radiology report generation, and building energy rating assessment, often using techniques like contrastive learning, explanation consistency evaluation, and multi-model ranking fusion. Understanding and addressing this inconsistency is crucial for improving the reliability and trustworthiness of AI systems, ultimately leading to more robust and dependable applications.
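The phenomenon can be quantified with a simple paired metric. As a minimal sketch (all names and the pairing scheme are illustrative assumptions, not from any cited paper), suppose each hard task has a related easy counterpart; the inconsistency rate is the fraction of pairs where the model solves the hard variant but fails the easy one:

```python
def hard_to_easy_inconsistency(results):
    """results: list of (solved_hard, solved_easy) boolean pairs,
    one per hard/easy task pair. Returns the fraction of pairs where
    the model solved the hard variant but failed the related easy one."""
    if not results:
        return 0.0
    inconsistent = sum(1 for hard_ok, easy_ok in results
                       if hard_ok and not easy_ok)
    return inconsistent / len(results)

# Hypothetical evaluation: 4 hard/easy pairs, 2 of which show the
# hard-solved-but-easy-failed pattern.
pairs = [(True, True), (True, False), (True, False), (False, True)]
print(hard_to_easy_inconsistency(pairs))  # → 0.5
```

A score of 0 means the model is never inconsistent in this direction; higher values indicate more frequent failures on easy tasks despite success on the harder counterparts.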