Natural Language Inference
Natural Language Inference (NLI) is the task of determining the logical relationship, typically entailment, contradiction, or neutrality, between a pair of sentences, and it is central to understanding and reasoning with natural language. Current research emphasizes making NLI models more robust to adversarial attacks and misinformation, improving efficiency through techniques such as layer pruning, adapting models to new domains, and developing more reliable evaluation methods that account for human judgment variability and for issues such as hallucination in large language models. These advances improve the accuracy and trustworthiness of downstream NLP applications, including question answering, text summarization, and fact verification, and support more reliable and explainable AI systems.
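As a minimal illustration of the task itself (not of any specific paper below), the sketch here scores a premise/hypothesis pair with an off-the-shelf NLI classifier from the Hugging Face Hub. The choice of the publicly available "roberta-large-mnli" checkpoint and the example sentences are assumptions for demonstration; any MNLI-style model with entailment/neutral/contradiction labels would be used the same way.

```python
# Minimal NLI sketch: classify the relation between a premise and a hypothesis.
# Assumes the Hugging Face `transformers` library and the public
# "roberta-large-mnli" checkpoint; any MNLI-style classifier works similarly.
import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

model_name = "roberta-large-mnli"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name)
model.eval()

premise = "A soccer game with multiple males playing."
hypothesis = "Some men are playing a sport."

# NLI models take the sentence pair encoded as a single input sequence.
inputs = tokenizer(premise, hypothesis, return_tensors="pt", truncation=True)
with torch.no_grad():
    logits = model(**inputs).logits

# Convert logits to probabilities; label names (entailment / neutral /
# contradiction) are read from the model's own config rather than hard-coded.
probs = logits.softmax(dim=-1)[0]
for idx, p in enumerate(probs):
    print(f"{model.config.id2label[idx]}: {p:.3f}")
```

For this pair, an MNLI-trained model is expected to place most of the probability mass on the entailment label, which is exactly the kind of pairwise judgment the papers below probe for robustness, efficiency, and evaluation reliability.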
Papers
DKE-Research at SemEval-2024 Task 2: Incorporating Data Augmentation with Generative Models and Biomedical Knowledge to Enhance Inference Robustness
Yuqi Wang, Zeqiang Wang, Wei Wang, Qi Chen, Kaizhu Huang, Anh Nguyen, Suparna De
TLDR at SemEval-2024 Task 2: T5-generated clinical-Language summaries for DeBERTa Report Analysis
Spandan Das, Vinay Samuel, Shahriar Noroozizadeh
SEME at SemEval-2024 Task 2: Comparing Masked and Generative Language Models on Natural Language Inference for Clinical Trials
Mathilde Aguiar, Pierre Zweigenbaum, Nona Naderi
Forget NLI, Use a Dictionary: Zero-Shot Topic Classification for Low-Resource Languages with Application to Luxembourgish
Fred Philippy, Shohreh Haddadan, Siwen Guo
On the Role of Summary Content Units in Text Summarization Evaluation
Marcel Nawrath, Agnieszka Nowak, Tristan Ratz, Danilo C. Walenta, Juri Opitz, Leonardo F. R. Ribeiro, João Sedoc, Daniel Deutsch, Simon Mille, Yixin Liu, Lining Zhang, Sebastian Gehrmann, Saad Mahamood, Miruna Clinciu, Khyathi Chandu, Yufang Hou
Evaluating Large Language Models Using Contrast Sets: An Experimental Approach
Manish Sanwal