Offensive Language
Offensive language detection aims to automatically identify hateful, abusive, and otherwise harmful language online, primarily to mitigate its negative societal impact. Current research focuses on improving the accuracy and robustness of detection models, particularly on the challenges posed by multilingualism, code-mixing, implicit offensiveness, and adversarial attacks; transformer-based models and ensemble methods are the prominent approaches. This field is crucial for creating safer online environments and fostering more respectful digital interactions, driving advances in natural language processing and shaping the design of content moderation systems.
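At its core, offensive language detection is a text-classification task. The sketch below illustrates that framing with a deliberately simple stdlib-only Naive Bayes baseline over a tiny invented toy dataset; it is not one of the transformer-based methods the field actually favors, and all examples and labels here are hypothetical placeholders for a real annotated corpus.

```python
import math
from collections import Counter

# Toy labeled data (1 = offensive, 0 = benign). Purely illustrative;
# real systems fine-tune transformers on large annotated corpora.
TRAIN = [
    ("you are a wonderful person", 0),
    ("thanks for the helpful answer", 0),
    ("what a lovely day", 0),
    ("you are a worthless idiot", 1),
    ("shut up you stupid fool", 1),
    ("go away you pathetic loser", 1),
]

def train_nb(examples):
    """Fit a multinomial Naive Bayes model over whitespace unigrams."""
    word_counts = {0: Counter(), 1: Counter()}
    class_counts = Counter()
    vocab = set()
    for text, label in examples:
        tokens = text.split()
        word_counts[label].update(tokens)
        class_counts[label] += 1
        vocab.update(tokens)
    return word_counts, class_counts, vocab

def predict(model, text):
    """Return the most probable label under add-one smoothing."""
    word_counts, class_counts, vocab = model
    total = sum(class_counts.values())
    scores = {}
    for label in class_counts:
        # log prior + summed smoothed log likelihoods of each token
        score = math.log(class_counts[label] / total)
        denom = sum(word_counts[label].values()) + len(vocab)
        for tok in text.split():
            score += math.log((word_counts[label][tok] + 1) / denom)
        scores[label] = score
    return max(scores, key=scores.get)

model = train_nb(TRAIN)
print(predict(model, "you stupid idiot"))       # 1 (offensive)
print(predict(model, "what a helpful answer"))  # 0 (benign)
```

A baseline like this fails on exactly the challenges noted above (implicit offensiveness, code-mixing, adversarial misspellings), which is why current work replaces the bag-of-words features with contextual transformer representations.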