Regional Bias

Regional bias in artificial intelligence models, which stems from training data skewed toward Western regions, is a significant concern because it degrades accuracy and fairness for underrepresented areas. Current research focuses on quantifying this bias across model types, including large language models and image classifiers, and on developing mitigation techniques such as fairness-oriented regularization in federated learning and proximity-informed calibration. Addressing regional bias is crucial for the equitable and reliable application of AI across diverse geographical contexts and for preventing the perpetuation of existing societal inequalities.
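
To make the regularization idea concrete, the sketch below adds a penalty on the spread of per-region losses to a standard training objective, which can be applied to each client's local update in a federated setting. This is a generic illustration rather than the method of any specific paper listed below; the region labels, the `lambda_fair` coefficient, and the example data are assumptions.

import torch
import torch.nn.functional as F


def region_balanced_loss(logits, targets, regions, lambda_fair=0.1):
    """Cross-entropy plus a penalty on the variance of per-region mean losses.

    Sketch of a fairness regularizer: the penalty is zero when every region
    in the batch is fit equally well, and grows as regional losses diverge.
    """
    per_sample = F.cross_entropy(logits, targets, reduction="none")
    base_loss = per_sample.mean()

    # Average loss within each region present in the batch.
    region_means = torch.stack(
        [per_sample[regions == r].mean() for r in regions.unique()]
    )

    # Penalize spread across regions.
    fairness_penalty = region_means.var(unbiased=False)
    return base_loss + lambda_fair * fairness_penalty


# Example usage with random data: 8 samples, 3 classes, 2 regions (assumed setup).
logits = torch.randn(8, 3, requires_grad=True)
targets = torch.randint(0, 3, (8,))
regions = torch.tensor([0, 0, 0, 1, 1, 1, 0, 1])
loss = region_balanced_loss(logits, targets, regions)
loss.backward()

In practice, `lambda_fair` trades off overall accuracy against cross-region parity, and the same penalty can be computed on held-out data per region rather than per batch.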

Papers