Natural Language Inference
Natural Language Inference (NLI) is the task of determining the logical relationship between a pair of sentences: whether a hypothesis is entailed by, contradicts, or is neutral with respect to a premise. It is a core benchmark for language understanding and reasoning. Current research emphasizes making NLI models robust to adversarial attacks and misinformation, improving efficiency through techniques such as layer pruning and domain adaptation, and developing more reliable evaluation methods that account for human judgment variability and mitigate hallucination in large language models. These advances improve the accuracy and trustworthiness of downstream NLP applications, including question answering, text summarization, and fact verification, ultimately leading to more reliable and explainable AI systems.
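To make the task's input/output contract concrete, here is a minimal sketch: each NLI example pairs a premise with a hypothesis and one of three labels. The classifier below is a deliberately naive lexical-overlap heuristic invented for illustration only; it is not a model from any of the papers listed, and as the final example shows, such surface heuristics misclassify exactly the cases that motivate robustness research.

```python
# Toy illustration of the NLI task. The heuristic is a hypothetical,
# deliberately naive baseline: it only looks at word overlap and
# negation cues, so it confuses entailment with neutral on paraphrases.

LABELS = ("entailment", "neutral", "contradiction")

EXAMPLES = [
    ("A man is playing a guitar.", "A person is playing an instrument.", "entailment"),
    ("A man is playing a guitar.", "A man is playing in a band.", "neutral"),
    ("A man is playing a guitar.", "Nobody is playing a guitar.", "contradiction"),
]

NEGATION_CUES = {"no", "not", "nobody", "never", "none"}

def naive_nli(premise: str, hypothesis: str) -> str:
    """Lexical baseline: negation cue in the hypothesis only -> contradiction;
    high word overlap -> entailment; otherwise neutral."""
    p = set(premise.lower().rstrip(".").split())
    h = set(hypothesis.lower().rstrip(".").split())
    if (h & NEGATION_CUES) and not (p & NEGATION_CUES):
        return "contradiction"
    overlap = len(p & h) / len(h)
    return "entailment" if overlap > 0.6 else "neutral"

for premise, hypothesis, gold in EXAMPLES:
    pred = naive_nli(premise, hypothesis)
    print(f"gold={gold:13s} pred={pred}")
```

In practice, NLI systems are fine-tuned neural classifiers rather than heuristics like this one, but the three-way label space and sentence-pair input shown here are exactly what the datasets and models in the papers below operate over.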
Papers
Zero-Shot Text Classification with Self-Training
Ariel Gera, Alon Halfon, Eyal Shnarch, Yotam Perlitz, Liat Ein-Dor, Noam Slonim
Effective Cross-Task Transfer Learning for Explainable Natural Language Inference with T5
Irina Bigoulaeva, Rachneet Sachdeva, Harish Tayyar Madabushi, Aline Villavicencio, Iryna Gurevych
MABEL: Attenuating Gender Bias using Textual Entailment Data
Jacqueline He, Mengzhou Xia, Christiane Fellbaum, Danqi Chen
BioNLI: Generating a Biomedical NLI Dataset Using Lexico-semantic Constraints for Adversarial Examples
Mohaddeseh Bastan, Mihai Surdeanu, Niranjan Balasubramanian
Leveraging Affirmative Interpretations from Negation Improves Natural Language Understanding
Md Mosharaf Hossain, Eduardo Blanco
Realistic Data Augmentation Framework for Enhancing Tabular Reasoning
Dibyakanti Kumar, Vivek Gupta, Soumya Sharma, Shuo Zhang
Lexical Generalization Improves with Larger Models and Longer Training
Elron Bandel, Yoav Goldberg, Yanai Elazar
Conformal Predictor for Improving Zero-shot Text Classification Efficiency
Prafulla Kumar Choubey, Yu Bai, Chien-Sheng Wu, Wenhao Liu, Nazneen Rajani