Natural Language Inference
Natural Language Inference (NLI) is the task of determining the logical relationship between a pair of sentences: whether a premise entails, contradicts, or is neutral toward a hypothesis. It is a core benchmark for language understanding and reasoning. Current research emphasizes making NLI models robust to adversarial attacks and misinformation, improving efficiency through techniques such as layer pruning and domain adaptation, and developing more reliable evaluation methods that account for human judgment variability and for hallucination in large language models. These advances matter for the accuracy and trustworthiness of downstream NLP applications, including question answering, text summarization, and fact verification, and ultimately for building more reliable and explainable AI systems.
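For concreteness, below is a minimal sketch of three-way NLI classification (contradiction / neutral / entailment), assuming the HuggingFace transformers library and the publicly available roberta-large-mnli checkpoint; the premise and hypothesis are illustrative examples, not drawn from the papers listed here.

```python
# Minimal NLI sketch: score a premise/hypothesis pair with a pretrained
# MNLI model. Assumes `pip install transformers torch`.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "roberta-large-mnli"  # public checkpoint fine-tuned on MultiNLI
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# The tokenizer joins the sentence pair with the model's separator tokens.
inputs = tokenizer(premise, hypothesis, return_tensors="pt")
with torch.no_grad():
    logits = model(**inputs).logits

# Softmax over the three class logits gives a probability per relation.
probs = torch.softmax(logits, dim=-1).squeeze()
for idx, p in enumerate(probs.tolist()):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```

For this pair the model assigns most of the probability mass to ENTAILMENT; swapping the hypothesis for "No men are playing a sport." would shift it toward CONTRADICTION, which is the behavior the robustness and evaluation work surveyed above stress-tests.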
Papers
Sources of Hallucination by Large Language Models on Inference Tasks
Nick McKenna, Tianyi Li, Liang Cheng, Mohammad Javad Hosseini, Mark Johnson, Mark Steedman
Evaluating and Modeling Attribution for Cross-Lingual Question Answering
Benjamin Muller, John Wieting, Jonathan H. Clark, Tom Kwiatkowski, Sebastian Ruder, Livio Baldini Soares, Roee Aharoni, Jonathan Herzig, Xinyi Wang
Can Large Language Models Capture Dissenting Human Voices?
Noah Lee, Na Min An, James Thorne
Abstract Meaning Representation-Based Logic-Driven Data Augmentation for Logical Reasoning
Qiming Bao, Alex Yuxuan Peng, Zhenyun Deng, Wanjun Zhong, Gael Gendron, Timothy Pistotti, Neset Tan, Nathan Young, Yang Chen, Yonghua Zhu, Paul Denny, Michael Witbrock, Jiamou Liu
OntoType: Ontology-Guided and Pre-Trained Language Model Assisted Fine-Grained Entity Typing
Tanay Komarlu, Minhao Jiang, Xuan Wang, Jiawei Han