Misogyny Detection
Misogyny detection research aims to automatically identify misogynistic language in text and multimedia, addressing challenges posed by implicit bias, figurative language, and diverse online contexts. Current approaches build on large language models, often augmented with techniques such as argumentation theory, word sense disambiguation, and graph-based contextualization to improve accuracy, particularly for low-resource languages and multimodal settings. This work is crucial for mitigating online harassment and hate speech, contributing to safer digital environments and advancing natural language processing methods for addressing societal biases.
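To make the detection task concrete, the sketch below shows a deliberately simple lexicon-based baseline. It is not a method from the papers surveyed here (those rely on fine-tuned large language models); the cue phrases, weights, and negation heuristic are hypothetical placeholders. Its failure modes, missing implicit misogyny and handling negation crudely, illustrate exactly why the contextual approaches described above are needed.

```python
import re

# Hypothetical cue lexicon; phrases and weights are illustrative
# placeholders, not drawn from any published resource.
CUE_WEIGHTS = {
    "belong in the kitchen": 2.0,  # explicit stereotype phrase
    "stupid woman": 2.0,
    "females are": 1.0,            # weak generic cue
}

NEGATORS = {"not", "never", "no"}


def score(text: str) -> float:
    """Crude misogyny score: sum of matched cue weights,
    halved if any negation word appears anywhere in the text."""
    lowered = text.lower()
    tokens = re.findall(r"[a-z']+", lowered)
    negated = any(t in NEGATORS for t in tokens)
    total = 0.0
    for cue, weight in CUE_WEIGHTS.items():
        if cue in lowered:
            total += weight * (0.5 if negated else 1.0)
    return total


def classify(text: str, threshold: float = 1.0) -> bool:
    """Flag text as misogynistic when the lexicon score meets the threshold."""
    return score(text) >= threshold


if __name__ == "__main__":
    print(classify("Women belong in the kitchen."))   # True: surface cue fires
    print(classify("She presented at the conference."))  # False: no cue matches
```

Note that a sentence rejecting a stereotype ("It is not true that women belong in the kitchen.") still scores 1.0 under the negation heuristic and is flagged, while implicitly misogynistic text with no lexicon match scores 0.0 and is missed; both errors motivate the LLM-based, context-aware methods this line of research pursues.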
Papers
[Paper entries dated September 4, 2024 through March 27, 2022; titles and links not recovered]