Abusive Language Detection

Abusive language detection aims to automatically identify hateful, offensive, or threatening language in online text and speech, mitigating the harmful effects of online abuse. Current research focuses on improving model performance across diverse languages and platforms, employing techniques such as multilingual and cross-lingual transfer learning, data augmentation, and transformer-based architectures like BERT and XLM-RoBERTa. Addressing challenges such as temporal bias, fairness concerns, and the subjective nature of "abuse" is crucial for developing robust and ethically sound detection systems, with significant implications for online safety and content moderation.
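At its core, the task is binary (or multi-class) text classification. As a minimal illustration, the sketch below trains a classical TF-IDF plus logistic-regression baseline on a tiny hypothetical toy corpus; this is an assumption for demonstration only, not one of the transformer-based methods discussed above, and real systems require large annotated datasets and careful evaluation.

```python
# Minimal baseline sketch for abusive-language detection.
# Assumption: toy corpus and a classical TF-IDF + logistic-regression
# pipeline, standing in for the larger transformer models (BERT,
# XLM-RoBERTa) used in current research.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy corpus: label 1 = abusive, 0 = benign.
texts = [
    "you are a worthless idiot",
    "I will hurt you",
    "get lost, loser",
    "have a great day",
    "thanks for the helpful answer",
    "see you at the meetup",
]
labels = [1, 1, 1, 0, 0, 0]

# Fit the vectorizer and classifier in one pipeline, then classify new text.
clf = make_pipeline(TfidfVectorizer(), LogisticRegression())
clf.fit(texts, labels)
prediction = clf.predict(["thanks, that was really helpful"])[0]
```

In practice, the same `fit`/`predict` interface applies when the feature extractor is replaced by a fine-tuned transformer encoder; the baseline mainly serves to make the input/output contract of the task concrete.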

Papers