Abusive Language
Abusive language detection research aims to automatically identify and mitigate harmful online communication across languages and platforms. Current efforts focus on improving model performance with transformer-based architectures (e.g., BERT, GPT), supervised contrastive learning, and active learning for data efficiency, while also addressing challenges such as temporal bias and the detection of implicit or nuanced abuse. This work is crucial for creating safer online environments and for informing effective moderation strategies, particularly in low-resource languages.
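The active-learning idea mentioned above can be sketched with pool-based uncertainty sampling: the model labels the examples it is least confident about first. This is an illustrative sketch only; the `toy_predict_proba` keyword heuristic is a hypothetical stand-in for a trained abuse classifier.

```python
# Illustrative sketch of pool-based active learning via uncertainty
# sampling, one common way to improve data efficiency in abuse detection.

def uncertainty(prob_abusive: float) -> float:
    """Uncertainty peaks at p = 0.5, where the model is least sure."""
    return 1.0 - abs(prob_abusive - 0.5) * 2.0

def select_for_labeling(pool, predict_proba, budget):
    """Pick the `budget` unlabeled texts the model is least certain about."""
    ranked = sorted(pool, key=lambda text: uncertainty(predict_proba(text)),
                    reverse=True)
    return ranked[:budget]

# Hypothetical keyword-based stand-in for a trained classifier's
# probability that a text is abusive.
def toy_predict_proba(text: str) -> float:
    if "idiot" in text:
        return 0.9
    if "stupid" in text:
        return 0.5
    return 0.1

pool = ["you idiot", "that was stupid of me", "have a nice day"]
print(select_for_labeling(pool, toy_predict_proba, budget=1))
# -> ['that was stupid of me']  (uncertainty 1.0 vs 0.2 for the others)
```

In practice, the selected texts would be sent to human annotators and the classifier retrained, repeating until the labeling budget is exhausted.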