NLU Models
Natural Language Understanding (NLU) models aim to let computers comprehend and interpret human language, powering applications such as chatbots and voice assistants. Current research focuses on improving the robustness and generalizability of these models, tackling dataset bias and shortcut learning through debiasing methods, in-context learning with large language models (LLMs), and efficient training strategies such as data pruning. These advances are key to building more reliable and accurate NLU systems, with applications ranging from software engineering to language learning, and to enabling more sophisticated human-computer interaction.
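To make the data-pruning idea above concrete, here is a minimal, hypothetical sketch of dynamic data subset selection for a classification task: the training set is periodically re-scored with the current model, and only the hardest examples (highest per-example loss) are kept for the next training phase. This is an illustration of the general technique, not the exact method of any paper listed below; the function names (`score_examples`, `select_subset`), the batch size, and the use of raw cross-entropy loss as a difficulty score are assumptions made for the example.

```python
# Hypothetical sketch of dynamic data subset selection ("data pruning"):
# rank training examples by current per-example loss and keep the top
# fraction. Assumes a PyTorch dataset yielding (inputs, labels) pairs
# and a model returning class logits.
import torch
import torch.nn.functional as F
from torch.utils.data import DataLoader, Subset


def score_examples(model, dataset, device="cpu"):
    """Compute a per-example difficulty score (here: cross-entropy loss)."""
    model.eval()
    scores = []
    loader = DataLoader(dataset, batch_size=256, shuffle=False)
    with torch.no_grad():
        for inputs, labels in loader:
            logits = model(inputs.to(device))
            losses = F.cross_entropy(logits, labels.to(device), reduction="none")
            scores.extend(losses.cpu().tolist())
    return scores


def select_subset(model, dataset, keep_fraction=0.5, device="cpu"):
    """Keep the highest-loss (hardest) examples for the next training phase."""
    scores = score_examples(model, dataset, device)
    k = int(len(dataset) * keep_fraction)
    ranked = sorted(range(len(dataset)), key=lambda i: scores[i], reverse=True)
    return Subset(dataset, ranked[:k])
```

In a training loop, one would call `select_subset` every few epochs and rebuild the `DataLoader` from the returned `Subset`, so that the retained examples track the model's current notion of difficulty rather than being fixed once up front.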
Papers
NLU on Data Diets: Dynamic Data Subset Selection for NLP Classification Tasks
Jean-Michel Attendu, Jean-Philippe Corbeil
KNOW How to Make Up Your Mind! Adversarially Detecting and Alleviating Inconsistencies in Natural Language Explanations
Myeongjun Jang, Bodhisattwa Prasad Majumder, Julian McAuley, Thomas Lukasiewicz, Oana-Maria Camburu