Sexism Identification
Sexism identification research focuses on automatically detecting sexist language in online text, with the goals of creating safer digital environments and understanding how harmful gender biases spread. Current efforts build on pre-trained transformer language models such as BERT and XLM-RoBERTa, using fine-tuning, few-shot learning, and ensemble methods to improve classification accuracy across multiple languages. Open challenges include annotator bias in training data and the inherent subjectivity of sexism, both of which call for robust, nuanced approaches to identifying and mitigating this pervasive issue.
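As a rough illustration of the fine-tuning approach described above, the sketch below fine-tunes XLM-RoBERTa as a binary sexist / not-sexist classifier with Hugging Face Transformers. The toy dataset, label scheme, and hyperparameters are illustrative assumptions and are not drawn from any specific paper listed here.

```python
# Minimal sketch: fine-tuning XLM-RoBERTa for binary sexism classification.
# The in-memory dataset stands in for a real annotated corpus; hyperparameters
# are placeholder values, not tuned settings from the cited work.
from datasets import Dataset
from transformers import (AutoModelForSequenceClassification, AutoTokenizer,
                          Trainer, TrainingArguments)

model_name = "xlm-roberta-base"
tokenizer = AutoTokenizer.from_pretrained(model_name)
model = AutoModelForSequenceClassification.from_pretrained(model_name, num_labels=2)

# Toy examples: label 1 = sexist, label 0 = not sexist.
train_ds = Dataset.from_dict({
    "text": ["example sexist comment", "example neutral comment"],
    "label": [1, 0],
})

def tokenize(batch):
    # Tokenize and pad/truncate to a fixed length for batching.
    return tokenizer(batch["text"], truncation=True, padding="max_length", max_length=128)

train_ds = train_ds.map(tokenize, batched=True)

args = TrainingArguments(
    output_dir="sexism-clf",
    num_train_epochs=3,
    per_device_train_batch_size=16,
    learning_rate=2e-5,
)

Trainer(model=model, args=args, train_dataset=train_ds).train()
```

The same recipe extends naturally to the multilingual and ensemble settings mentioned above, for example by fine-tuning several checkpoints and averaging their predicted probabilities at inference time.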
Papers
Listed papers date from November 8, 2021 to October 4, 2024.