Language Understanding
Language understanding research aims to enable computers to comprehend and process human language as effectively as humans do, spanning both natural language understanding (NLU) and natural language generation (NLG). Current work emphasizes making models robust to noise, ambiguity, and bias, often using transformer-based architectures, grammar induction, and techniques such as retrieval-augmented generation and mixture-of-experts routing. These advances have significant implications for a range of applications, including better chatbots, more accurate machine translation, and improved accessibility for people with communication difficulties.
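As a rough illustration of one technique named above, the sketch below shows the general idea of retrieval-augmented generation: fetch the passages most relevant to a query, then condition generation on them. This is a generic sketch, not the method of any paper listed here; the toy corpus, bag-of-words retriever, and `generate` stub are placeholder assumptions standing in for a real dense retriever and language model.

```python
# Minimal sketch of retrieval-augmented generation (RAG): retrieve relevant
# passages for a query, then condition a generator on them.
from collections import Counter
import math

# Toy corpus standing in for a real document store (assumption).
corpus = [
    "The transformer architecture relies on self-attention.",
    "Mixture-of-experts layers route tokens to specialised sub-networks.",
    "Retrieval-augmented generation grounds answers in retrieved documents.",
]

def bow(text):
    """Bag-of-words term frequencies for a toy lexical retriever."""
    return Counter(text.lower().split())

def cosine(a, b):
    """Cosine similarity between two sparse term-frequency vectors."""
    dot = sum(a[t] * b[t] for t in a)
    norm = math.sqrt(sum(v * v for v in a.values())) * \
           math.sqrt(sum(v * v for v in b.values()))
    return dot / norm if norm else 0.0

def retrieve(query, k=2):
    """Return the k passages most similar to the query."""
    q = bow(query)
    ranked = sorted(corpus, key=lambda doc: cosine(q, bow(doc)), reverse=True)
    return ranked[:k]

def generate(prompt):
    """Placeholder for a language-model call (e.g. a seq2seq transformer)."""
    return f"[model output conditioned on a prompt of {len(prompt)} characters]"

query = "How does retrieval-augmented generation work?"
context = "\n".join(retrieve(query))
print(generate(f"Context:\n{context}\n\nQuestion: {query}\nAnswer:"))
```

In practice the lexical retriever would be replaced by a learned dense retriever and the stub by an actual language model, but the retrieve-then-generate structure is the same.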
Papers
Contrastive Representation Learning for Cross-Document Coreference Resolution of Events and Entities
Benjamin Hsu, Graham Horwood
StreamingQA: A Benchmark for Adaptation to New Knowledge over Time in Question Answering Models
Adam Liška, Tomáš Kočiský, Elena Gribovskaya, Tayfun Terzi, Eren Sezener, Devang Agrawal, Cyprien de Masson d'Autume, Tim Scholtes, Manzil Zaheer, Susannah Young, Ellen Gilsenan-McMahon, Sophia Austin, Phil Blunsom, Angeliki Lazaridou
Vector-Quantized Input-Contextualized Soft Prompts for Natural Language Understanding
Rishabh Bhardwaj, Amrita Saha, Steven C. H. Hoi, Soujanya Poria
TreeMix: Compositional Constituency-based Data Augmentation for Natural Language Understanding
Le Zhang, Zichao Yang, Diyi Yang
DTW at Qur'an QA 2022: Utilising Transfer Learning with Transformers for Question Answering in a Low-resource Domain
Damith Premasiri, Tharindu Ranasinghe, Wajdi Zaghouani, Ruslan Mitkov