Natural Language Understanding Task
Natural Language Understanding (NLU) focuses on enabling computers to comprehend and interpret human language, bridging the gap between human communication and machine processing. Current research emphasizes improving model robustness and fairness, in particular reducing bias against dialects such as African American Vernacular English and handling negation correctly. Common techniques include prompt engineering, data augmentation (e.g., expanding abstract descriptions), and reinforcement learning with label-sensitive rewards. These advances are crucial for building more inclusive and reliable NLU systems, with applications ranging from automated essay scoring to medical diagnosis support.
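One common form of the data augmentation mentioned above is pairing training sentences with negated, label-flipped counterparts so a classifier cannot ignore negation. The sketch below is purely illustrative, not from any of the listed papers; the rule list and the binary label-flip are simplifying assumptions, and a real pipeline would use a parser or an LLM to negate reliably.

```python
# Minimal sketch of negation-focused data augmentation for NLU training data.
# All function names and rewrite rules here are illustrative assumptions.

def negate(sentence: str) -> str:
    """Produce a naively negated variant of a simple English sentence.

    Covers only common copula/auxiliary cases; anything else is returned
    unchanged so it can be filtered out by the caller.
    """
    rules = [(" is ", " is not "), (" are ", " are not "),
             (" can ", " cannot "), (" will ", " will not ")]
    for old, new in rules:
        if old in sentence:
            return sentence.replace(old, new, 1)
    return sentence  # no applicable rule: leave unchanged


def augment_with_negations(dataset):
    """Extend (sentence, label) pairs with negated, label-flipped variants.

    Assumes binary labels in {0, 1}, so flipping is `1 - label`.
    """
    augmented = list(dataset)
    for sentence, label in dataset:
        negated = negate(sentence)
        if negated != sentence:  # only keep successfully negated sentences
            augmented.append((negated, 1 - label))
    return augmented


data = [("The treatment is effective.", 1)]
print(augment_with_negations(data))
```

Because the negated variant carries the opposite label, the model is rewarded for attending to the negation cue rather than to surface lexical overlap, which is the failure mode this augmentation targets.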
Papers
BioMistral-NLU: Towards More Generalizable Medical Language Understanding through Instruction Tuning
Yujuan Velvin Fu, Giridhar Kaushik Ramachandran, Namu Park, Kevin Lybarger, Fei Xia, Ozlem Uzuner, Meliha Yetisgen
Task Calibration: Calibrating Large Language Models on Inference Tasks
Yingjie Li, Yun Luo, Xiaotian Xie, Yue Zhang