Abusive Content

Abusive content detection on social media aims to identify and mitigate harmful online interactions such as hate speech and cyberbullying, with an emphasis on improving the accuracy and fairness of automated moderation systems. Current research focuses on building robust models, often based on transformer architectures such as BERT and its multilingual variants, that can accurately classify abusive content across diverse languages and platforms while handling challenges such as code-switching and implicit or otherwise nuanced forms of abuse. This work is crucial for fostering safer online environments, with significant implications both for the development of ethical AI and for mitigating the real-world harms of online abuse.
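
As a concrete illustration of the transformer-based approach described above, the sketch below fine-tunes a pretrained multilingual BERT checkpoint as a binary abusive/non-abusive classifier with the Hugging Face `transformers` library. The toy in-line dataset, the `bert-base-multilingual-cased` checkpoint, and the hyperparameters are illustrative assumptions, not drawn from any specific paper in this collection.

```python
# Minimal sketch: fine-tuning a multilingual BERT model for binary
# abusive-content classification. The toy dataset, checkpoint, and
# hyperparameters are illustrative assumptions only.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

MODEL_NAME = "bert-base-multilingual-cased"  # assumed checkpoint

# Tiny illustrative dataset: (text, label), where label 1 = abusive.
examples = [
    ("You are a wonderful person", 0),
    ("I hope you have a great day", 0),
    ("You people are all worthless", 1),
    ("Get lost, nobody wants you here", 1),
]

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME, num_labels=2)

# Tokenize the whole toy batch at once.
texts, labels = zip(*examples)
enc = tokenizer(list(texts), padding=True, truncation=True,
                max_length=64, return_tensors="pt")
labels = torch.tensor(labels)

optimizer = torch.optim.AdamW(model.parameters(), lr=2e-5)
model.train()
for epoch in range(3):  # a few steps suffice to illustrate the loop
    optimizer.zero_grad()
    out = model(input_ids=enc["input_ids"],
                attention_mask=enc["attention_mask"],
                labels=labels)
    out.loss.backward()
    optimizer.step()
    print(f"epoch {epoch}: loss={out.loss.item():.4f}")

# Inference: score a new post for abusiveness.
model.eval()
with torch.no_grad():
    query = tokenizer("No one here likes you, just leave", return_tensors="pt")
    probs = torch.softmax(model(**query).logits, dim=-1)
    print("P(abusive) =", probs[0, 1].item())
```

Because the checkpoint is multilingual, the same pipeline can in principle be fine-tuned on mixed-language or code-switched data, though the fairness and implicit-abuse challenges noted above require more than this plain classification setup.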

Papers