Abuse Detection
Abuse detection research focuses on automatically identifying harmful content across online platforms, with the goal of curbing the spread of abusive language and protecting users. Current efforts concentrate on improving the accuracy and generalizability of detection models through techniques such as multi-task learning, transformer-based architectures (e.g., BERT, mBERT), and multimodal signals (text, audio, emotion). This work is crucial for creating safer online environments and informs the development of responsible AI systems for content moderation, with implications for social media platforms and other online services.
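As a concrete illustration of the transformer-based approach mentioned above, the sketch below shows how a BERT-style encoder could be wired up as a binary abusive-language classifier with the Hugging Face Transformers library. This is a minimal example under stated assumptions, not the method of any specific paper listed here: the model name, label set, and example texts are placeholders, and the classification head would need fine-tuning on labeled abuse data before its predictions mean anything.

```python
# Minimal sketch: scoring texts for abusiveness with an mBERT encoder.
# Assumptions: binary labels (non-abusive / abusive), placeholder example texts,
# and an untrained classification head that still requires fine-tuning.
import torch
from transformers import AutoTokenizer, AutoModelForSequenceClassification

model_name = "bert-base-multilingual-cased"  # mBERT; any BERT variant works similarly
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

texts = ["Have a great day!", "Get lost, you idiot."]  # hypothetical inputs
batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")

with torch.no_grad():
    logits = model(**batch).logits           # shape: (batch_size, 2)
probs = torch.softmax(logits, dim=-1)        # columns: P(non-abusive), P(abusive)
print(probs)                                 # meaningful only after fine-tuning
```

Multi-task or multimodal variants would extend this same backbone with additional prediction heads or extra input encoders (e.g., for audio or emotion features), trained jointly on the related objectives.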
Papers
19 papers, published between February 16, 2022 and September 9, 2024.