Stereotypical Bias
Research on stereotypical bias in artificial intelligence, particularly in large language models (LLMs) and text-to-image generators, focuses on identifying and mitigating harmful societal biases encoded within these systems. Current work investigates bias across demographic categories such as gender, race, and ethnicity using diverse datasets and evaluation metrics, often employing techniques such as reinforcement learning and counterfactual data generation to debias models. This research is crucial for ensuring fairness and equity in AI applications: it advances responsible AI development and deepens our understanding of how biases are learned and propagated through machine learning systems.
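The counterfactual data generation mentioned above can be illustrated with a minimal sketch of counterfactual data augmentation: each training sentence is paired with a copy in which gendered terms are swapped, so the model sees both variants equally often. The swap list and function names here are illustrative assumptions, not from any specific paper; real systems use much larger lexicons and handle grammatical ambiguities (e.g. possessive "her") that this toy version ignores.

```python
import re

# Hypothetical, deliberately small swap list; real CDA lexicons are far larger.
SWAP_PAIRS = [("he", "she"), ("him", "her"), ("his", "hers"),
              ("man", "woman"), ("men", "women"), ("father", "mother")]

# Build a bidirectional lookup table from the pairs.
SWAP = {}
for a, b in SWAP_PAIRS:
    SWAP[a] = b
    SWAP[b] = a

def counterfactual(sentence: str) -> str:
    """Return a copy of `sentence` with gendered terms swapped."""
    def swap_word(match):
        word = match.group(0)
        repl = SWAP.get(word.lower(), word)
        # Preserve the capitalization of the original token.
        return repl.capitalize() if word[0].isupper() else repl
    # Match whole alphabetic tokens so "The" is never partially swapped.
    return re.sub(r"[A-Za-z]+", swap_word, sentence)

def augment(corpus):
    """Pair each sentence with its counterfactual for balanced training."""
    return [s for sent in corpus for s in (sent, counterfactual(sent))]
```

Training on the augmented corpus reduces the statistical association between gendered words and, say, occupations, which is the core idea behind counterfactual debiasing.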