Language Understanding
Language understanding research aims to enable computers to comprehend and process human language as effectively as humans do, spanning both natural language understanding (NLU) and natural language generation (NLG). Current work emphasizes making models robust to noise, ambiguity, and bias, often building on transformer architectures and grammar induction, and drawing on techniques such as retrieval-augmented generation and mixture-of-experts to improve performance across diverse tasks. These advances underpin applications such as more capable chatbots, better machine translation, and improved accessibility for people with communication impairments.
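As a rough illustration of the retrieval-augmented generation idea mentioned above, the sketch below retrieves the passages most similar to a query and prepends them to the prompt before generation. It is a minimal, hypothetical example: the bag-of-words similarity stands in for a learned retriever, and names such as `build_prompt` and the corpus are illustrative, not taken from any of the listed papers.

```python
# Minimal sketch of retrieval-augmented prompting (illustrative only).
import math
from collections import Counter

def embed(text: str) -> Counter:
    # Toy bag-of-words "embedding"; a real system would use a neural encoder.
    return Counter(text.lower().split())

def cosine(a: Counter, b: Counter) -> float:
    # Cosine similarity between two sparse count vectors.
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    # Rank corpus passages by similarity to the query and keep the top k.
    q = embed(query)
    ranked = sorted(corpus, key=lambda d: cosine(q, embed(d)), reverse=True)
    return ranked[:k]

def build_prompt(query: str, corpus: list[str]) -> str:
    # Retrieved passages are placed before the question so a downstream
    # (hypothetical) generator can condition on them.
    context = "\n".join(retrieve(query, corpus))
    return f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"

corpus = [
    "The Eiffel Tower is located in Paris.",
    "Transformers use self-attention over token embeddings.",
    "Mixture-of-experts routes tokens to specialised sub-networks.",
]
print(build_prompt("Where is the Eiffel Tower?", corpus))
```

In practice the final prompt would be passed to a large language model; the key design point shown here is that retrieval happens before generation, so the model's output can be grounded in the retrieved context rather than in its parameters alone.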
Papers
M3Exam: A Multilingual, Multimodal, Multilevel Benchmark for Examining Large Language Models
Wenxuan Zhang, Sharifah Mahani Aljunied, Chang Gao, Yew Ken Chia, Lidong Bing
InfoPrompt: Information-Theoretic Soft Prompt Tuning for Natural Language Understanding
Junda Wu, Tong Yu, Rui Wang, Zhao Song, Ruiyi Zhang, Handong Zhao, Chaochao Lu, Shuai Li, Ricardo Henao
On Degrees of Freedom in Defining and Testing Natural Language Understanding
Saku Sugawara, Shun Tsugita
C-STS: Conditional Semantic Textual Similarity
Ameet Deshpande, Carlos E. Jimenez, Howard Chen, Vishvak Murahari, Victoria Graf, Tanmay Rajpurohit, Ashwin Kalyan, Danqi Chen, Karthik Narasimhan
Prompting Large Language Models for Counterfactual Generation: An Empirical Study
Yongqi Li, Mayi Xu, Xin Miao, Shen Zhou, Tieyun Qian
Bi-Drop: Enhancing Fine-tuning Generalization via Synchronous sub-net Estimation and Optimization
Shoujie Tong, Heming Xia, Damai Dai, Runxin Xu, Tianyu Liu, Binghuai Lin, Yunbo Cao, Zhifang Sui
DialogVCS: Robust Natural Language Understanding in Dialogue System Upgrade
Zefan Cai, Xin Zheng, Tianyu Liu, Xu Wang, Haoran Meng, Jiaqi Han, Gang Yuan, Binghuai Lin, Baobao Chang, Yunbo Cao