Natural Language Processing Models
Natural Language Processing (NLP) focuses on enabling computers to understand, interpret, and generate human language. Current research emphasizes improving model performance across diverse tasks, including text classification, summarization, question answering, and complex interactive scenarios such as negotiation, often by leveraging transformer-based architectures such as BERT and its variants alongside large language models (LLMs). These advances are improving applications ranging from educational assessment and medical diagnosis to search and recommendation systems. A key open challenge is ensuring fairness, robustness, and efficiency across languages and domains, with ongoing work on mitigating bias and reducing model resource usage.
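As a concrete illustration of the transformer-based text classification mentioned above, here is a minimal sketch using the Hugging Face transformers pipeline API. The zero-shot approach, the facebook/bart-large-mnli checkpoint, and the example labels are illustrative assumptions, not details taken from the papers listed below.

```python
# Minimal sketch: zero-shot text classification with a pretrained
# transformer via the Hugging Face `transformers` library.
# The checkpoint and labels below are illustrative assumptions.
from transformers import pipeline

# An NLI-trained model can score arbitrary candidate labels
# without task-specific fine-tuning.
classifier = pipeline(
    "zero-shot-classification",
    model="facebook/bart-large-mnli",
)

result = classifier(
    "The chapter summarizes photosynthesis in plant cells.",
    candidate_labels=["biology", "physics", "history"],
)

# `result["labels"]` is sorted by descending score.
print(result["labels"][0], round(result["scores"][0], 3))
```

In practice, fine-tuning a checkpoint such as XLM-RoBERTa on labeled in-domain data (as in the bilingual sexism classification paper below) typically outperforms zero-shot inference when enough examples are available.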
Papers
Transformer Models in Education: Summarizing Science Textbooks with AraBART, MT5, AraT5, and mBART
Sari Masri, Yaqeen Raddad, Fidaa Khandaqji, Huthaifa I. Ashqar, Mohammed Elhenawy
Bilingual Sexism Classification: Fine-Tuned XLM-RoBERTa and GPT-3.5 Few-Shot Learning
AmirMohammad Azadi, Baktash Ansari, Sina Zamani
Improving Commonsense Bias Classification by Mitigating the Influence of Demographic Terms
JinKyu Lee, Jihie Kim