Misogyny Detection

Misogyny detection research aims to automatically identify misogynistic language in text and multimedia, with particular attention to the challenges posed by implicit or figurative expressions of misogyny and the diversity of online contexts. Current approaches build on large language models, often augmented with techniques such as argumentation theory, word sense disambiguation, and graph-based contextualization to improve accuracy, particularly for low-resource languages and multimodal settings. This work is crucial for mitigating online harassment and hate speech, contributing to safer digital environments and advancing natural language processing methods for addressing societal biases.
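
As a minimal sketch of the transformer-based classification setup most of this work builds on, the snippet below scores texts with a binary sequence classifier. The checkpoint name, label ordering, and example inputs are placeholders (assumptions), not a specific published model; in practice such a classifier would be fine-tuned on a misogyny-labelled corpus such as the AMI shared-task data.

```python
# Minimal sketch: scoring texts with a transformer-based misogyny classifier.
# The model name below is a hypothetical fine-tuned checkpoint, not a
# specific published one.

import torch
from transformers import AutoModelForSequenceClassification, AutoTokenizer

MODEL_NAME = "your-org/misogyny-detector-base"  # placeholder checkpoint

tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForSequenceClassification.from_pretrained(MODEL_NAME)
model.eval()

def score_misogyny(texts: list[str]) -> list[float]:
    """Return the probability of the misogynous class for each input text."""
    batch = tokenizer(texts, padding=True, truncation=True, return_tensors="pt")
    with torch.no_grad():
        logits = model(**batch).logits
    # Assumes label index 1 corresponds to the misogynous class.
    probs = torch.softmax(logits, dim=-1)[:, 1]
    return probs.tolist()

if __name__ == "__main__":
    examples = ["Example post to classify.", "Another example post."]
    for text, p in zip(examples, score_misogyny(examples)):
        print(f"{p:.3f}  {text}")
```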

Papers