Abusive Language Detection
Abusive language detection aims to automatically identify hateful, offensive, or threatening language in online text and speech, mitigating the harmful effects of online abuse. Current research focuses on improving model performance across diverse languages and platforms, employing techniques such as multilingual and cross-lingual transfer learning, data augmentation, and transformer-based architectures like BERT and XLM-RoBERTa. Addressing challenges such as temporal bias, fairness concerns, and the subjective nature of "abuse" is crucial for developing robust and ethically sound detection systems, with significant implications for online safety and content moderation.
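To make the classification setup concrete: research in this area typically fine-tunes transformer models such as BERT or XLM-RoBERTa, but the core task is binary (or multi-class) text classification. The sketch below uses a much lighter TF-IDF plus logistic-regression baseline, with a tiny hypothetical toy dataset invented for illustration, simply to show the shape of the problem; it is not the method of any particular paper.

```python
# Simplified abusive-language classification sketch.
# NOTE: transformer models (BERT, XLM-RoBERTa) dominate current research;
# this classical TF-IDF + logistic-regression baseline is a stand-in,
# and the toy dataset below is hypothetical.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Hypothetical toy examples (label 1 = abusive, 0 = benign).
texts = [
    "you are a worthless idiot",
    "go away, nobody wants you here",
    "I will hurt you",
    "thanks for the helpful answer",
    "great talk, I learned a lot",
    "see you at the meetup tomorrow",
]
labels = [1, 1, 1, 0, 0, 0]

# Word and bigram TF-IDF features feeding a linear classifier.
clf = make_pipeline(
    TfidfVectorizer(ngram_range=(1, 2)),
    LogisticRegression(),
)
clf.fit(texts, labels)

# Predict a label for new text; real systems would evaluate on a
# held-out set and report precision/recall per class.
print(clf.predict(["you are a worthless idiot"])[0])
```

A real pipeline would swap the featurizer and classifier for a fine-tuned multilingual transformer, and would have to confront the issues the overview names: label subjectivity, fairness across demographic groups, and drift in abusive vocabulary over time.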