Paper ID: 2412.16194

Multi-Head Attention Debiasing and Contrastive Learning for Mitigating Dataset Artifacts in Natural Language Inference

Karthik Sivakoti

While Natural Language Inference (NLI) models achieve high performance on benchmark datasets, concerns remain about whether they truly capture the intended task or largely exploit dataset artifacts. Through detailed analysis of the Stanford Natural Language Inference (SNLI) dataset, we uncover complex interactions among several types of artifacts, which motivates our structural debiasing approach. Our fine-grained analysis of 9,782 validation examples reveals four major categories of artifacts: length-based patterns, lexical overlap, subset relationships, and negation patterns. Our multi-head debiasing architecture yields substantial improvements across all bias categories: accuracy on length-biased examples improves from 86.03% to 90.06%, on overlap-biased examples from 91.88% to 93.13%, on subset-biased examples from 95.43% to 96.49%, and on negation-biased examples from 88.69% to 94.64%. Overall, our approach reduces the error rate from 14.19% to 10.42% while maintaining high performance on unbiased examples. Analysis of 1,026 error cases shows marked improvement in handling neutral relationships, traditionally one of the most challenging areas for NLI systems.
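The abstract does not specify implementation details, but a multi-head debiasing setup of this kind is commonly realized as a shared encoder feeding one main NLI head plus one auxiliary head per bias category. The following is a minimal sketch under that assumption; all module names, dimensions, the mean-pooling choice, and the loss weighting are illustrative, not the authors' actual code.

```python
# Minimal sketch (not the paper's implementation): shared encoder with a
# main NLI head and one auxiliary head per bias category (length, overlap,
# subset, negation). Sizes and the aux_weight value are hypothetical.
import torch
import torch.nn as nn

BIAS_CATEGORIES = ["length", "overlap", "subset", "negation"]
NUM_LABELS = 3  # entailment / neutral / contradiction

class MultiHeadDebiasNLI(nn.Module):
    def __init__(self, vocab_size=30522, hidden=256, num_layers=2):
        super().__init__()
        self.embed = nn.Embedding(vocab_size, hidden)
        layer = nn.TransformerEncoderLayer(d_model=hidden, nhead=4,
                                           batch_first=True)
        self.encoder = nn.TransformerEncoder(layer, num_layers=num_layers)
        self.main_head = nn.Linear(hidden, NUM_LABELS)
        # One auxiliary head per bias category; each sees the same pooled
        # representation and is trained only on examples flagged for its bias.
        self.bias_heads = nn.ModuleDict(
            {name: nn.Linear(hidden, NUM_LABELS) for name in BIAS_CATEGORIES}
        )

    def forward(self, input_ids):
        h = self.encoder(self.embed(input_ids))   # (batch, seq, hidden)
        pooled = h.mean(dim=1)                    # simple mean pooling
        logits = {"main": self.main_head(pooled)}
        logits.update({name: head(pooled)
                       for name, head in self.bias_heads.items()})
        return logits

def debias_loss(logits, labels, bias_masks, aux_weight=0.5):
    """Cross-entropy on the main head plus weighted auxiliary losses,
    each restricted to examples exhibiting that bias category."""
    ce = nn.functional.cross_entropy
    loss = ce(logits["main"], labels)
    for name in BIAS_CATEGORIES:
        mask = bias_masks[name]                   # bool tensor (batch,)
        if mask.any():
            loss = loss + aux_weight * ce(logits[name][mask], labels[mask])
    return loss

if __name__ == "__main__":
    model = MultiHeadDebiasNLI()
    ids = torch.randint(0, 30522, (8, 32))        # toy batch of token ids
    labels = torch.randint(0, NUM_LABELS, (8,))
    masks = {n: torch.rand(8) > 0.5 for n in BIAS_CATEGORIES}
    print(debias_loss(model(ids), labels, masks).item())
```

The design point this sketch illustrates is that per-category heads let bias-specific supervision shape the shared representation without a single head having to trade off all four artifact types at once; the title's contrastive-learning component would add a further term over the pooled representations, which is omitted here for brevity.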

Submitted: Dec 16, 2024