Natural Language Processing Models
Natural Language Processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research emphasizes improving model performance on diverse tasks, including text classification, summarization, question answering, and even complex interactive scenarios such as negotiations. This work often leverages transformer-based architectures such as BERT and its variants, along with large language models (LLMs). These advances are driving significant improvements across applications, from educational assessment and medical diagnosis to search and recommendation systems. A key challenge remains ensuring fairness, robustness, and efficiency across languages and domains, with ongoing efforts focused on mitigating biases and optimizing model resource usage.
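The transformer architectures mentioned above are built around scaled dot-product attention. A minimal NumPy sketch of that operation follows; the `temperature` parameter is an illustrative assumption added here to show how attention sharpness can be modulated (as explored in fairness-oriented work below), and is not part of the standard formula:

```python
import numpy as np

def scaled_dot_product_attention(Q, K, V, temperature=1.0):
    """Core attention operation in transformer models (e.g. BERT).

    `temperature` is a hypothetical knob for illustration: values > 1
    flatten the attention distribution, values < 1 sharpen it. The
    standard formula fixes it at 1.
    """
    d_k = Q.shape[-1]
    scores = Q @ K.T / (np.sqrt(d_k) * temperature)
    # Numerically stable softmax over the key dimension
    scores -= scores.max(axis=-1, keepdims=True)
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)
    return weights @ V, weights

rng = np.random.default_rng(0)
Q = rng.normal(size=(4, 8))   # 4 query positions, hidden dim 8
K = rng.normal(size=(6, 8))   # 6 key positions
V = rng.normal(size=(6, 8))
out, w = scaled_dot_product_attention(Q, K, V)
```

Each row of `w` is a probability distribution over the 6 key positions, and `out` mixes the value vectors accordingly.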
Papers
Should We Attend More or Less? Modulating Attention for Fairness
Abdelrahman Zayed, Goncalo Mordido, Samira Shabanian, Sarath Chandar
On Bias and Fairness in NLP: Investigating the Impact of Bias and Debiasing in Language Models on the Fairness of Toxicity Detection
Fatma Elsafoury, Stamos Katsigiannis