Offensive Language
Offensive language detection aims to automatically identify hateful, abusive, and otherwise harmful language online, primarily to mitigate its negative societal impact. Current research focuses on improving the accuracy and robustness of detection models, particularly in the face of multilingual and code-mixed text, implicitly offensive content, and adversarial attacks; transformer-based models and ensemble methods are the prominent approaches. The field is central to building safer online environments and more respectful digital interactions, and it both drives advances in natural language processing and shapes the design of content moderation systems.
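As a concrete illustration of the transformer-based approach mentioned above, the minimal sketch below runs an off-the-shelf sequence-classification pipeline over a couple of example sentences. The checkpoint name (cardiffnlp/twitter-roberta-base-offensive) and the example texts are illustrative assumptions, not a method endorsed by any particular paper; any model fine-tuned for offensive-language detection could be substituted.

```python
# Minimal sketch: using a fine-tuned transformer as an offensive-language classifier.
# The checkpoint below is an illustrative choice; swap in any sequence-classification
# model fine-tuned for this task.
from transformers import pipeline

classifier = pipeline(
    "text-classification",
    model="cardiffnlp/twitter-roberta-base-offensive",
)

examples = [
    "Have a great day, everyone!",
    "You are completely worthless.",
]

for text in examples:
    result = classifier(text)[0]  # dict with predicted 'label' and 'score'
    print(f"{text!r} -> {result['label']} ({result['score']:.2f})")
```

In practice, ensemble methods of the kind surveyed here would combine the scores of several such classifiers (for example, by averaging probabilities or majority voting) to improve robustness to adversarial or out-of-domain inputs.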