Natural Language Inference
Natural Language Inference (NLI) focuses on determining the logical relationship between pairs of sentences, a crucial task for understanding and reasoning with natural language. Current research emphasizes improving NLI model robustness against adversarial attacks and misinformation, enhancing efficiency through techniques like layer pruning and domain adaptation, and developing more reliable evaluation methods that account for human judgment variability and address issues like hallucination in large language models. These advancements are significant for improving the accuracy and trustworthiness of various NLP applications, including question answering, text summarization, and fact verification, ultimately leading to more reliable and explainable AI systems.
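The paragraph mentions evaluation methods that account for human judgment variability: NLI pairs are typically labeled entailment, neutral, or contradiction by several annotators, and annotator disagreement carries signal. A minimal sketch of that idea, aggregating per-pair annotator labels into a majority-vote gold label plus an entropy-based disagreement score (the specific labels and counts below are illustrative, not from any of the listed papers):

```python
import math
from collections import Counter

def aggregate_labels(annotations):
    """Majority-vote gold label plus the entropy (in bits) of the
    annotator label distribution; higher entropy means the human
    judgments for this sentence pair disagree more."""
    counts = Counter(annotations)
    total = sum(counts.values())
    majority, _ = counts.most_common(1)[0]
    entropy = -sum((c / total) * math.log2(c / total)
                   for c in counts.values())
    return majority, entropy

# Hypothetical sentence pair judged by five annotators, using the
# standard three-way NLI label scheme.
labels = ["entailment", "entailment", "neutral",
          "entailment", "contradiction"]
gold, disagreement = aggregate_labels(labels)
```

An evaluation that scores models only against `gold` discards the disagreement signal; comparing a model's predicted label distribution against the full annotator distribution is one way to account for it.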
Papers
Measuring and Improving Attentiveness to Partial Inputs with Counterfactuals
Yanai Elazar, Bhargavi Paranjape, Hao Peng, Sarah Wiegreffe, Khyathi Raghavi Chandu, Vivek Srikumar, Sameer Singh, Noah A. Smith
Clarify When Necessary: Resolving Ambiguity Through Interaction with LMs
Michael J. Q. Zhang, Eunsol Choi
Think While You Write: Hypothesis Verification Promotes Faithful Knowledge-to-Text Generation
Yifu Qiu, Varun Embar, Shay B. Cohen, Benjamin Han
Using Natural Language Explanations to Improve Robustness of In-context Learning
Xuanli He, Yuxiang Wu, Oana-Maria Camburu, Pasquale Minervini, Pontus Stenetorp
Semi-automatic Data Enhancement for Document-Level Relation Extraction with Distant Supervision from Large Language Models
Junpeng Li, Zixia Jia, Zilong Zheng
In Search of the Long-Tail: Systematic Generation of Long-Tail Inferential Knowledge via Logical Rule Guided Search
Huihan Li, Yuting Ning, Zeyi Liao, Siyuan Wang, Xiang Lorraine Li, Ximing Lu, Wenting Zhao, Faeze Brahman, Yejin Choi, Xiang Ren