Stereotype Content
Stereotype content research investigates how biases and stereotypes are represented and perpetuated within large language models (LLMs) and other AI systems, with the aim of understanding and mitigating their harmful societal impact. Current research focuses on identifying and quantifying these biases across modalities (text, images), languages, and demographic groups, often employing techniques such as adversarial attacks and explainable AI methods to analyze model behavior and develop mitigation strategies. By promoting the development of less biased, more responsible AI systems, this work is crucial for ensuring fairness and equity in applications ranging from education and healthcare to hiring and criminal justice.
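One common probing strategy in this line of work is to compare a model's associations under controlled, name-swapped prompts. The sketch below is a minimal illustration of that idea, not the method of either paper listed here; the model choice (bert-base-uncased), the prompt template, the name pair (echoing the title of the first paper), and the attribute tokens are all assumptions made for demonstration.

```python
# Minimal stereotype probe (illustrative only): score how strongly a
# masked language model associates different first names with the same
# attribute words, using an otherwise identical template.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

TEMPLATE = "{name} is very good at [MASK]."
NAMES = ["Jenny", "Jingzhen"]   # hypothetical name pair for the contrast
TARGETS = ["math", "art"]       # hypothetical attribute tokens to score

for name in NAMES:
    # Restrict the fill-mask scores to the chosen attribute tokens.
    results = unmasker(TEMPLATE.format(name=name), targets=TARGETS)
    scores = {r["token_str"].strip(): r["score"] for r in results}
    print(name + ": " + ", ".join(
        f"P({t}) = {scores.get(t, 0.0):.4f}" for t in TARGETS))
```

Systematic differences in attribute scores across names are one simple signal of stereotype content; published studies typically aggregate such contrasts over many templates, names, and attributes rather than relying on a single prompt.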
Papers
Who is better at math, Jenny or Jingzhen? Uncovering Stereotypes in Large Language Models
Zara Siddique, Liam D. Turner, Luis Espinosa-Anke
Divine LLaMAs: Bias, Stereotypes, Stigmatization, and Emotion Representation of Religion in Large Language Models
Flor Miriam Plaza-del-Arco, Amanda Cercas Curry, Susanna Paoli, Alba Curry, Dirk Hovy