Language Understanding
Language understanding research aims to enable computers to comprehend and process human language as effectively as people do, spanning both natural language understanding (NLU) and natural language generation (NLG). Current work emphasizes making models robust to noise, ambiguity, and bias, often employing transformer-based architectures, grammar induction techniques, and methods such as retrieval-augmented generation and mixture-of-experts to improve performance across diverse tasks. These advances matter for applications such as more capable chatbots, more accurate machine translation, and better accessibility tools for people with communication challenges.
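To make the retrieval-augmented generation mentioned above concrete, here is a minimal Python sketch of the core loop: retrieve the most relevant documents for a query, then condition generation on them. The bag-of-words retriever and the `generate` stub are illustrative simplifications of my own, not taken from any of the papers listed below; a real system would use learned embeddings and an actual language model call.

```python
# Minimal retrieval-augmented generation (RAG) sketch.
# Assumption: a toy token-overlap retriever and a hypothetical
# `generate` stub stand in for a real embedding index and LLM.

def tokenize(text: str) -> set[str]:
    """Lowercase and split text into a set of tokens."""
    return set(text.lower().split())

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k documents sharing the most tokens with the query."""
    q = tokenize(query)
    ranked = sorted(corpus, key=lambda d: len(q & tokenize(d)), reverse=True)
    return ranked[:k]

def generate(prompt: str) -> str:
    # Placeholder: in practice this would call a language model.
    return f"[model answer conditioned on]\n{prompt}"

def rag_answer(query: str, corpus: list[str]) -> str:
    """Ground the model's answer in retrieved context."""
    context = "\n".join(retrieve(query, corpus))
    prompt = f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"
    return generate(prompt)

corpus = [
    "Transformers process text with self-attention.",
    "Grammar induction learns syntactic structure from raw text.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
]
print(rag_answer("How does retrieval-augmented generation work?", corpus))
```

The design point is that retrieval and generation stay decoupled: the retriever can be swapped for a dense-vector index without touching the prompting logic.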
Papers
Improving Language Understanding from Screenshots
Tianyu Gao, Zirui Wang, Adithya Bhaskar, Danqi Chen
Science Checker Reloaded: A Bidirectional Paradigm for Transparency and Logical Reasoning
Loïc Rakotoson, Sylvain Massip, Fréjus A. A. Laleye
Dynamic Evaluation of Large Language Models by Meta Probing Agents
Kaijie Zhu, Jindong Wang, Qinlin Zhao, Ruochen Xu, Xing Xie