Natural Language Inference
Natural Language Inference (NLI) is the task of determining the logical relationship between a pair of sentences — whether a hypothesis is entailed by, contradicts, or is neutral with respect to a premise — a capability central to understanding and reasoning with natural language. Current research emphasizes improving NLI model robustness against adversarial attacks and misinformation, enhancing efficiency through techniques such as layer pruning and domain adaptation, and developing more reliable evaluation methods that account for human judgment variability and address issues like hallucination in large language models. These advances matter for the accuracy and trustworthiness of downstream NLP applications, including question answering, text summarization, and fact verification, and ultimately contribute to more reliable and explainable AI systems.
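To make the task concrete, here is a minimal, self-contained sketch of the standard three-way NLI data format (premise, hypothesis, label). The class name, example sentences, and labels below are illustrative only — they are not drawn from any of the papers listed on this page, though the entailment/neutral/contradiction label scheme is the conventional one used in NLI benchmarks.

```python
from dataclasses import dataclass

# The conventional three-way NLI label set.
LABELS = ("entailment", "neutral", "contradiction")

@dataclass
class NLIExample:
    """One NLI instance: a premise, a hypothesis, and a gold label."""
    premise: str
    hypothesis: str
    label: str

    def __post_init__(self):
        # Reject labels outside the standard scheme.
        if self.label not in LABELS:
            raise ValueError(f"unknown label: {self.label}")

# Illustrative examples showing each relationship for one shared premise.
examples = [
    NLIExample("A man is playing a guitar on stage.",
               "A person is performing music.", "entailment"),
    NLIExample("A man is playing a guitar on stage.",
               "The concert is sold out.", "neutral"),
    NLIExample("A man is playing a guitar on stage.",
               "Nobody is playing an instrument.", "contradiction"),
]

for ex in examples:
    print(ex.label)
```

In practice an NLI model maps each (premise, hypothesis) pair to a probability distribution over these three labels; the adversarial and evaluation work surveyed above probes how stable those predictions are under perturbation and how well they match the distribution of human judgments.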
Papers
ViANLI: Adversarial Natural Language Inference for Vietnamese
Tin Van Huynh, Kiet Van Nguyen, Ngan Luu-Thuy Nguyen
"Seeing the Big through the Small": Can LLMs Approximate Human Judgment Distributions on NLI from a Few Explanations?
Beiduo Chen, Xinpeng Wang, Siyao Peng, Robert Litschko, Anna Korhonen, Barbara Plank
IrokoBench: A New Benchmark for African Languages in the Age of Large Language Models
David Ifeoluwa Adelani, Jessica Ojo, Israel Abebe Azime, Jian Yun Zhuang, Jesujoba O. Alabi, Xuanli He, Millicent Ochieng, Sara Hooker, Andiswa Bukula, En-Shiun Annie Lee, Chiamaka Chukwuneke, Happy Buzaaba, Blessing Sibanda, Godson Kalipe, Jonathan Mukiibi, Salomon Kabongo, Foutse Yuehgoh, Mmasibidi Setaka, Lolwethu Ndolela, Nkiruka Odu, Rooweither Mabuya, Shamsuddeen Hassan Muhammad, Salomey Osei, Sokhar Samb, Tadesse Kebede Guge, Pontus Stenetorp
CSS: Contrastive Semantic Similarity for Uncertainty Quantification of LLMs
Shuang Ao, Stefan Rueger, Advaith Siddharthan