Language Understanding
Language understanding research aims to enable computers to comprehend and process human language as effectively as humans do, spanning both natural language understanding (NLU) and natural language generation (NLG). Current work emphasizes making models robust to noise, ambiguity, and bias, often using transformer-based architectures, grammar induction techniques, and methods such as retrieval-augmented generation and mixture-of-experts to improve performance across diverse tasks. These advances have significant implications for applications such as better chatbots, more accurate machine translation, and improved accessibility for people with communication challenges.
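For illustration only, the sketch below shows one common NLU setup of the kind referenced above: a pretrained transformer repurposed for zero-shot intent classification. It is not drawn from any of the papers listed here; the model name and candidate intents are arbitrary choices, and it assumes the Hugging Face transformers library is installed.

    # Minimal sketch: zero-shot NLU with a pretrained transformer (illustrative only).
    # Assumes the Hugging Face `transformers` library is available; the model and
    # candidate intents below are hypothetical choices, not taken from any listed paper.
    from transformers import pipeline

    # Load a pretrained NLI model exposed as a zero-shot classifier.
    classifier = pipeline("zero-shot-classification", model="facebook/bart-large-mnli")

    utterance = "Can you book me a table for two tomorrow evening?"
    candidate_intents = ["make_reservation", "cancel_reservation", "ask_opening_hours"]

    # The classifier scores each candidate intent against the utterance.
    result = classifier(utterance, candidate_labels=candidate_intents)
    print(result["labels"][0], result["scores"][0])  # best-matching intent and its score

In practice, such a zero-shot classifier is often a baseline; fine-tuning on labeled utterances or adding retrieval, as in some of the papers below, typically improves robustness.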
Papers
Back Transcription as a Method for Evaluating Robustness of Natural Language Understanding Models to Speech Recognition Errors
Marek Kubis, Paweł Skórzewski, Marcin Sowański, Tomasz Ziętkiewicz
An Early Evaluation of GPT-4V(ision)
Yang Wu, Shilong Wang, Hao Yang, Tian Zheng, Hongbo Zhang, Yanyan Zhao, Bing Qin
Pre-Trained Language Models Augmented with Synthetic Scanpaths for Natural Language Understanding
Shuwen Deng, Paul Prasse, David R. Reich, Tobias Scheffer, Lena A. Jäger
CoF-CoT: Enhancing Large Language Models with Coarse-to-Fine Chain-of-Thought Prompting for Multi-domain NLU Tasks
Hoang H. Nguyen, Ye Liu, Chenwei Zhang, Tao Zhang, Philip S. Yu
Chain-of-Thought Tuning: Masked Language Models can also Think Step By Step in Natural Language Understanding
Caoyun Fan, Jidong Tian, Yitian Li, Wenqing Chen, Hao He, Yaohui Jin
Reflection-Tuning: Data Recycling Improves LLM Instruction-Tuning
Ming Li, Lichang Chen, Jiuhai Chen, Shwai He, Heng Huang, Jiuxiang Gu, Tianyi Zhou
Adaptation with Self-Evaluation to Improve Selective Prediction in LLMs
Jiefeng Chen, Jinsung Yoon, Sayna Ebrahimi, Sercan O. Arik, Tomas Pfister, Somesh Jha