Reasoning Shortcut
Reasoning shortcuts are unintended strategies that machine learning models, particularly large language models and neuro-symbolic systems, use to solve tasks without performing the intended reasoning, for example by exploiting spurious correlations in the training data. Current research focuses on identifying and mitigating these shortcuts across tasks such as natural language inference and machine reading comprehension, using techniques like concept-level confidence calibration and data augmentation with explicit proofs. Addressing reasoning shortcuts is crucial for improving the reliability, trustworthiness, and interpretability of AI systems, and ultimately for building more robust and dependable artificial intelligence.
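To make the idea concrete, here is a minimal illustrative sketch (an assumed toy setup, not taken from the listed papers) of a reasoning shortcut in a neuro-symbolic setting: the task label is defined by the symbolic rule y = c1 XOR c2 over two binary concepts. A model that learns the negation of both concepts still predicts every label correctly, because flipping both inputs of XOR leaves the output unchanged, so perfect label accuracy can hide entirely wrong concept predictions.

```python
def xor_label(c1: int, c2: int) -> int:
    """Symbolic knowledge: the task label is the XOR of two binary concepts."""
    return c1 ^ c2

def shortcut_concepts(c1: int, c2: int) -> tuple[int, int]:
    """A hypothetical 'shortcut' extractor that learned the negation of each concept."""
    return 1 - c1, 1 - c2

# Enumerate all four possible ground-truth concept assignments.
for c1 in (0, 1):
    for c2 in (0, 1):
        p1, p2 = shortcut_concepts(c1, c2)
        # The predicted label is always correct ...
        assert xor_label(p1, p2) == xor_label(c1, c2)
        # ... even though both predicted concepts are always wrong.
        assert (p1, p2) != (c1, c2)

print("100% label accuracy, 0% concept accuracy")
```

This is why label accuracy alone cannot detect reasoning shortcuts; evaluating the intermediate concepts, or calibrating confidence at the concept level as the work below proposes, is needed to expose them.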
Papers
BEARS Make Neuro-Symbolic Models Aware of their Reasoning Shortcuts
Emanuele Marconato, Samuele Bortolotti, Emile van Krieken, Antonio Vergari, Andrea Passerini, Stefano Teso
Investigating Multi-Hop Factual Shortcuts in Knowledge Editing of Large Language Models
Tianjie Ju, Yijin Chen, Xinwei Yuan, Zhuosheng Zhang, Wei Du, Yubin Zheng, Gongshen Liu