Language Understanding
Language understanding research aims to enable computers to comprehend and process human language as effectively as humans do, spanning both natural language understanding (NLU) and natural language generation (NLG). Current work emphasizes improving model robustness to noise, ambiguity, and bias, often employing transformer-based architectures, grammar induction techniques, and methods such as retrieval-augmented generation and mixture-of-experts to improve performance across diverse tasks. These advances have direct implications for applications such as more capable chatbots, more accurate machine translation, and better accessibility tools for people with communication challenges.
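To make one of the methods named above concrete, the sketch below shows the basic retrieve-then-generate loop behind retrieval-augmented generation. It is illustrative only and not drawn from any of the papers listed here: embed is a toy hashed bag-of-words stand-in for a real text encoder, and generate is a placeholder for whatever language model would actually produce the answer.

```python
# Minimal retrieval-augmented generation sketch (toy components, not a real system).
import numpy as np

def embed(text: str, dim: int = 256) -> np.ndarray:
    """Toy embedding: hash each token into a fixed-size vector and L2-normalize."""
    vec = np.zeros(dim)
    for token in text.lower().split():
        vec[hash(token) % dim] += 1.0
    norm = np.linalg.norm(vec)
    return vec / norm if norm > 0 else vec

def retrieve(query: str, corpus: list[str], k: int = 2) -> list[str]:
    """Return the k passages most similar to the query by cosine similarity."""
    q = embed(query)
    scores = [float(q @ embed(doc)) for doc in corpus]
    top = np.argsort(scores)[::-1][:k]
    return [corpus[i] for i in top]

def generate(prompt: str) -> str:
    """Placeholder for a language model call (an API or a local model)."""
    return f"[model output conditioned on a prompt of {len(prompt)} characters]"

corpus = [
    "Slot filling extracts structured arguments from user utterances.",
    "Ambiguity in natural language can lead models to inconsistent predictions.",
    "Mixture-of-experts routes tokens to specialized sub-networks.",
]
question = "How does ambiguity affect language models?"
context = "\n".join(retrieve(question, corpus))
print(generate(f"Context:\n{context}\n\nQuestion: {question}\nAnswer:"))
```

In a real system the retriever would use a trained encoder and an approximate nearest-neighbor index, and the retrieved passages would be passed to a generative model; the structure of the loop, retrieve relevant context and condition generation on it, stays the same.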
Papers
Generalized Multiple Intent Conditioned Slot Filling
Harshil Shah, Arthur Wilcke, Marius Cobzarenco, Cristi Cobzarenco, Edward Challis, David Barber
Measuring and Mitigating Local Instability in Deep Neural Networks
Arghya Datta, Subhrangshu Nandi, Jingcheng Xu, Greg Ver Steeg, He Xie, Anoop Kumar, Aram Galstyan
PMC-LLaMA: Towards Building Open-source Language Models for Medicine
Chaoyi Wu, Weixiong Lin, Xiaoman Zhang, Ya Zhang, Yanfeng Wang, Weidi Xie
We're Afraid Language Models Aren't Modeling Ambiguity
Alisa Liu, Zhaofeng Wu, Julian Michael, Alane Suhr, Peter West, Alexander Koller, Swabha Swayamdipta, Noah A. Smith, Yejin Choi