Stereotypical Bias
Research on stereotypical bias in artificial intelligence, particularly in large language models (LLMs) and text-to-image generators, focuses on identifying and mitigating the harmful societal biases these systems encode. Current work measures bias across demographic categories such as gender, race, and ethnicity using a range of benchmark datasets and evaluation metrics, and often applies techniques such as reinforcement learning and counterfactual data generation to debias models. This research is crucial for fairness and equity in AI applications: it informs the development of responsible AI and deepens our understanding of how biases are learned and propagated through machine learning systems.
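To make one of the named mitigation techniques concrete, below is a minimal sketch of counterfactual data generation (often called counterfactual data augmentation, CDA): each training sentence is duplicated with demographic terms swapped so the model sees both variants equally often. The word pairs, sentences, and function names here are illustrative assumptions for this sketch, not drawn from any particular paper or dataset.

```python
# Minimal sketch of counterfactual data augmentation (CDA) for gender terms.
# The word list and corpus below are toy examples chosen for illustration.

GENDER_PAIRS = {
    "he": "she", "she": "he",
    "man": "woman", "woman": "man",
    "men": "women", "women": "men",
    "father": "mother", "mother": "father",
    "son": "daughter", "daughter": "son",
    # Forms such as "her" are ambiguous ("him" or "his"); a real CDA
    # pipeline resolves these with part-of-speech tagging, so they are
    # omitted from this toy word list.
}


def counterfactual(sentence: str) -> str:
    """Swap each gendered token for its counterpart, preserving casing."""
    out = []
    for tok in sentence.split():
        # Separate trailing punctuation so "man." still matches "man".
        core = tok.rstrip(".,!?;:")
        tail = tok[len(core):]
        swap = GENDER_PAIRS.get(core.lower())
        if swap is None:
            out.append(tok)
        else:
            swap = swap.capitalize() if core[0].isupper() else swap
            out.append(swap + tail)
    return " ".join(out)


if __name__ == "__main__":
    corpus = ["He is a doctor.", "The woman asked if she could help."]
    # Train on the union of originals and counterfactuals so neither
    # demographic variant dominates the training signal.
    augmented = corpus + [counterfactual(s) for s in corpus]
    for s in augmented:
        print(s)
```

The design choice worth noting is that CDA balances the training distribution rather than filtering it: stereotyped associations are not deleted but paired with their counterfactuals, which is why the technique is commonly combined with the evaluation metrics mentioned above to verify that measured bias actually decreases.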