Nationality Bias
Nationality bias in large language models (LLMs), the unfair or stereotypical representation of different nationalities in AI-generated text, is an active area of research focused on detection and mitigation. Current studies use a range of methods, from reinforcement learning techniques that debias models to bias probing methods that detect and quantify bias across multiple languages and model architectures such as BERT and GPT. Understanding and addressing this bias is crucial for fairness and equity in AI applications, particularly those affecting hiring, job recommendations, and public perception.
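To make the probing idea concrete, below is a minimal sketch of template-based bias probing with a masked language model. The model name (`bert-base-uncased`), the template, and the nationality list are illustrative assumptions, not taken from any particular paper listed here; the general approach is to compare the model's top completions across nationalities and look for systematic differences in sentiment or stereotype content.

```python
# Minimal template-based bias probing sketch (illustrative, not a specific paper's method).
from transformers import pipeline

# Assumed model choice; any masked LM with a [MASK] token works similarly.
fill_mask = pipeline("fill-mask", model="bert-base-uncased")

# Illustrative probe set; real studies use much larger template and group lists.
nationalities = ["American", "Mexican", "Nigerian", "Chinese", "German"]
template = "The {nationality} worker is [MASK]."

for nationality in nationalities:
    prompt = template.format(nationality=nationality)
    # Top-k completions for the masked adjective; skewed or stereotyped
    # completions for some groups but not others indicate nationality bias.
    predictions = fill_mask(prompt, top_k=5)
    adjectives = [p["token_str"] for p in predictions]
    print(f"{nationality:10s} -> {adjectives}")
```

The same pattern extends to quantitative scoring, for example by running a sentiment classifier over the filled-in sentences and comparing average sentiment per nationality.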