Regional Bias
Regional bias in artificial intelligence models, which stems from training data skewed toward Western regions, is a significant concern affecting model accuracy and fairness. Current research focuses on quantifying this bias across model types, including large language models and image classifiers, and on developing mitigation techniques such as regularization methods in federated learning and proximity-informed calibration. Addressing regional bias is crucial for applying AI equitably and reliably across diverse geographical contexts and for preventing the perpetuation of existing societal inequalities.
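For concreteness, a minimal sketch (not drawn from any specific paper in this area) of one common way regional bias is quantified: compute a model's accuracy separately for each region represented in the evaluation data and report the gap between the best- and worst-served regions. The function and variable names below are illustrative assumptions.

```python
import numpy as np

def per_region_accuracy(y_true, y_pred, regions):
    """Classifier accuracy broken down by region label."""
    y_true, y_pred, regions = map(np.asarray, (y_true, y_pred, regions))
    return {r: float(np.mean(y_pred[regions == r] == y_true[regions == r]))
            for r in np.unique(regions)}

def regional_disparity(region_acc):
    """Gap between the best- and worst-served regions (0 = perfectly even)."""
    values = list(region_acc.values())
    return max(values) - min(values)

# Toy example: a model that does well on "NA"/"EU" samples but poorly on "AF".
y_true  = [1, 0, 1, 1, 0, 1, 0, 1, 1, 0]
y_pred  = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
regions = ["NA", "NA", "EU", "EU", "EU", "AF", "AF", "AF", "NA", "AF"]

acc = per_region_accuracy(y_true, y_pred, regions)
print(acc)                      # e.g. {'AF': 0.25, 'EU': 1.0, 'NA': 1.0}
print(regional_disparity(acc))  # 0.75
```

Mitigation methods of the kind mentioned above (e.g., regularization in federated learning) typically add a penalty based on such a disparity measure to the training objective, encouraging updates that narrow the gap between regions rather than maximizing average accuracy alone.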