Language Understanding
Language understanding research aims to enable computers to comprehend and process human language as effectively as people do, spanning both natural language understanding (NLU) and natural language generation (NLG). Current work emphasizes making models robust to noise, ambiguity, and bias, often through transformer-based architectures, grammar induction, and techniques such as retrieval-augmented generation and mixture-of-experts routing. These advances underpin applications ranging from better chatbots and machine translation to improved accessibility for people with communication challenges.
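Of the techniques named above, retrieval-augmented generation is the most mechanical to illustrate: retrieve the passages most relevant to a query, then condition generation on them. Below is a minimal, hypothetical Python sketch of that two-step loop; the toy corpus, the bag-of-words scorer, and the generate() stub are illustrative stand-ins (real systems use dense or BM25 retrieval and an actual language model), not the method of any paper listed here.

```python
# Minimal sketch of retrieval-augmented generation (RAG):
# (1) score and retrieve passages for a query, (2) build a prompt that
# conditions generation on the retrieved context. All names below are
# hypothetical stand-ins for illustration only.
from collections import Counter

CORPUS = [
    "Transformers process text with self-attention over token embeddings.",
    "Mixture-of-experts routes each token to a small subset of expert networks.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
]

def score(query: str, passage: str) -> float:
    # Bag-of-words overlap; real systems use dense embeddings or BM25.
    q, p = Counter(query.lower().split()), Counter(passage.lower().split())
    return sum((q & p).values())

def retrieve(query: str, k: int = 2) -> list[str]:
    # Return the k passages with the highest overlap score.
    return sorted(CORPUS, key=lambda d: score(query, d), reverse=True)[:k]

def generate(query: str, context: list[str]) -> str:
    # Stand-in for an LLM call: a real system would send this prompt
    # to a language model instead of returning it verbatim.
    return "Context:\n" + "\n".join(context) + f"\n\nQuestion: {query}"

query = "How does mixture-of-experts work?"
print(generate(query, retrieve(query)))
```

The design point the sketch captures is the separation of concerns: the retriever can be swapped (lexical, dense, hybrid) without touching the generator, which is one reason the approach is attractive for domain adaptation.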
Papers
Khayyam Challenge (PersianMMLU): Is Your LLM Truly Wise to The Persian Language?
Omid Ghahroodi, Marzia Nouri, Mohammad Vali Sanian, Alireza Sahebi, Doratossadat Dastgheib, Ehsaneddin Asgari, Mahdieh Soleymani Baghshah, Mohammad Hossein Rohban
RAR-b: Reasoning as Retrieval Benchmark
Chenghao Xiao, G Thomas Hudson, Noura Al Moubayed
LLMs' Reading Comprehension Is Affected by Parametric Knowledge and Struggles with Hypothetical Statements
Victoria Basmov, Yoav Goldberg, Reut Tsarfaty
GUIDE: Graphical User Interface Data for Execution
Rajat Chawla, Adarsh Jha, Muskaan Kumar, Mukunda NS, Ishaan Bhola
The Invalsi Benchmarks: measuring Linguistic and Mathematical understanding of Large Language Models in Italian
Giovanni Puccetti, Maria Cassese, Andrea Esuli
BLADE: Enhancing Black-box Large Language Models with Small Domain-Specific Models
Haitao Li, Qingyao Ai, Jia Chen, Qian Dong, Zhijing Wu, Yiqun Liu, Chong Chen, Qi Tian
mALBERT: Is a Compact Multilingual BERT Model Still Worth It?
Christophe Servan, Sahar Ghannay, Sophie Rosset