Abusive Language

Abusive language detection research aims to automatically identify and mitigate harmful online communication across languages and platforms. Current efforts focus on improving model performance with transformer-based architectures (e.g., BERT, GPT), supervised contrastive learning, and active learning for data efficiency, while also addressing challenges such as temporal bias and the detection of implicit or nuanced abuse. This work is crucial for creating safer online environments and for informing effective moderation strategies, particularly for low-resource languages, where labeled data is scarce.
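To make the active-learning idea concrete, here is a minimal sketch of uncertainty sampling, a common way to improve data efficiency: the model's least-confident predictions are prioritized for human annotation. The function names, toy texts, and hard-coded probabilities below are illustrative stand-ins, not part of any specific system; a real pipeline would obtain probabilities from a trained classifier such as a fine-tuned transformer.

```python
import math

def entropy(p):
    """Binary entropy of a predicted abuse probability p (max at p = 0.5)."""
    if p in (0.0, 1.0):
        return 0.0
    return -(p * math.log2(p) + (1 - p) * math.log2(1 - p))

def select_for_labeling(unlabeled, predict_proba, budget):
    """Uncertainty sampling: return the `budget` texts whose predicted
    abuse probability the model is least certain about (highest entropy)."""
    ranked = sorted(unlabeled, key=lambda t: entropy(predict_proba(t)), reverse=True)
    return ranked[:budget]

# Toy stand-in for a classifier's probability output (illustrative values only).
def toy_predict_proba(text):
    return {"you are awful": 0.9, "nice weather": 0.05, "that was dumb": 0.55}.get(text, 0.5)

pool = ["you are awful", "nice weather", "that was dumb"]
print(select_for_labeling(pool, toy_predict_proba, 1))  # → ['that was dumb']
```

The example with probability 0.55 is selected because it sits closest to the decision boundary, which is exactly the kind of ambiguous, implicitly abusive text that benefits most from human labels.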

Papers