Abuse Detection

Abuse detection research focuses on automatically identifying harmful content across online platforms, aiming to curb the spread of abusive language and protect users. Current efforts concentrate on improving the accuracy and generalizability of detection models by employing techniques such as multi-task learning and transformer-based architectures (e.g., BERT, mBERT), and by incorporating multimodal signals (text, audio, emotion) to boost performance. This field is crucial for creating safer online environments and for informing the development of responsible AI systems for content moderation, with implications for social media platforms and other online services alike.
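
To make the transformer-based approach concrete, the minimal sketch below fine-tunes a pretrained mBERT checkpoint for binary abusive-language classification with the Hugging Face Transformers library. The toy texts, the two-label scheme (0 = benign, 1 = abusive), and the single training step are illustrative assumptions, not a method from any specific paper listed here; a real system would train over a labelled abuse corpus for multiple epochs.

```python
# Minimal sketch: fine-tuning mBERT for abusive-language detection.
# Assumptions: binary labels (0 = benign, 1 = abusive) and a toy in-memory batch.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # mBERT; swap in "bert-base-uncased" for English-only

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

texts = ["Thanks for the helpful answer!", "You are worthless, get lost."]  # toy examples
labels = torch.tensor([0, 1])

# Tokenize and run one optimization step (a real setup loops over many batches/epochs).
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)

model.train()
outputs = model(**batch, labels=labels)  # returns cross-entropy loss and logits
outputs.loss.backward()
optimizer.step()

# Inference: classify a new comment as benign or abusive.
model.eval()
with torch.no_grad():
    probe = tokenizer(["nobody wants you here"], return_tensors="pt")
    pred = model(**probe).logits.argmax(dim=-1).item()
    print("abusive" if pred == 1 else "benign")
```

Multi-task or multimodal variants extend the same pattern by adding further classification heads or by fusing audio/emotion features with the text encoder's representations before the final classifier.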

Papers