Language Understanding
Language understanding research aims to enable computers to comprehend and process human language as effectively as humans do, spanning both natural language understanding (NLU) and natural language generation (NLG). Current work emphasizes making models robust to noise, ambiguity, and bias, typically building on transformer architectures and augmenting them with techniques such as grammar induction, retrieval-augmented generation, and mixture-of-experts layers. These advances have direct applications in chatbots, machine translation, and accessibility tools for people with communication challenges.
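To make one of the techniques named above concrete, below is a minimal sketch of a mixture-of-experts layer: a learned router sends each token to a small subset of expert feed-forward networks and mixes their outputs. This is a generic PyTorch illustration, not the method of any paper listed here; names such as MoELayer and the choices of expert count and top-k routing are assumptions for the example.

# Minimal mixture-of-experts sketch (illustrative only; MoELayer,
# n_experts, and top-k gating are assumptions, not from a listed paper).
import torch
import torch.nn as nn
import torch.nn.functional as F

class MoELayer(nn.Module):
    """Route each token through a weighted mix of small expert MLPs."""
    def __init__(self, d_model: int, n_experts: int = 4, top_k: int = 2):
        super().__init__()
        self.top_k = top_k
        self.gate = nn.Linear(d_model, n_experts)  # learned router
        self.experts = nn.ModuleList(
            nn.Sequential(nn.Linear(d_model, 4 * d_model),
                          nn.GELU(),
                          nn.Linear(4 * d_model, d_model))
            for _ in range(n_experts)
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, seq, d_model)
        logits = self.gate(x)                           # (B, S, n_experts)
        weights, idx = logits.topk(self.top_k, dim=-1)  # keep top-k experts
        weights = F.softmax(weights, dim=-1)            # renormalize over k
        out = torch.zeros_like(x)
        for k in range(self.top_k):
            for e, expert in enumerate(self.experts):
                mask = idx[..., k] == e                 # tokens routed to expert e
                if mask.any():
                    out[mask] += weights[..., k][mask].unsqueeze(-1) * expert(x[mask])
        return out

if __name__ == "__main__":
    layer = MoELayer(d_model=64)
    tokens = torch.randn(2, 10, 64)  # (batch, seq, d_model)
    print(layer(tokens).shape)       # torch.Size([2, 10, 64])

The design point this sketch illustrates is conditional computation: each token activates only top_k of the experts, so capacity grows with the number of experts while per-token compute stays roughly constant; production systems add load-balancing losses and batched dispatch that are omitted here for brevity.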
Papers
Continuation KD: Improved Knowledge Distillation through the Lens of Continuation Optimization
Aref Jafari, Ivan Kobyzev, Mehdi Rezagholizadeh, Pascal Poupart, Ali Ghodsi
RPN: A Word Vector Level Data Augmentation Algorithm in Deep Learning for Language Understanding
Zhengqing Yuan, Xiaolong Zhang, Yue Wang, Xuecong Hou, Huiwen Xue, Zhuanzhe Zhao, Yongming Liu
Collaborating Heterogeneous Natural Language Processing Tasks via Federated Learning
Chenhe Dong, Yuexiang Xie, Bolin Ding, Ying Shen, Yaliang Li
SocioProbe: What, When, and Where Language Models Learn about Sociodemographics
Anne Lauscher, Federico Bianchi, Samuel Bowman, Dirk Hovy
ATCO2 corpus: A Large-Scale Dataset for Research on Automatic Speech Recognition and Natural Language Understanding of Air Traffic Control Communications
Juan Zuluaga-Gomez, Karel Veselý, Igor Szöke, Alexander Blatt, Petr Motlicek, Martin Kocour, Mickael Rigault, Khalid Choukri, Amrutha Prasad, Seyyed Saeed Sarfjoo, Iuliia Nigmatulina, Claudia Cevenini, Pavel Kolčárek, Allan Tart, Jan Černocký, Dietrich Klakow