Machine Bias
Machine bias refers to systematic and repeatable errors in algorithms that create unfair or discriminatory outcomes, often reflecting biases present in the data used to train them. Current research focuses on identifying and quantifying subtle biases in large language models and other machine learning classifiers, employing novel metrics and fairness-aware algorithms to assess and mitigate these issues across diverse applications like entity matching and occupation risk prediction. Understanding and addressing machine bias is crucial for ensuring fairness and accountability in AI systems, impacting both the development of more equitable algorithms and the responsible deployment of AI in various societal contexts.
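The overview mentions quantifying bias with metrics and fairness-aware algorithms but does not name specific measures. As an illustration only, the sketch below computes two standard group-fairness metrics, demographic parity difference and equal opportunity difference, on hypothetical classifier outputs; the function names and toy data are assumptions, not drawn from the papers covered here.

```python
import numpy as np

def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = [y_pred[group == g].mean() for g in np.unique(group)]
    return abs(rates[0] - rates[1])

def equal_opportunity_difference(y_true, y_pred, group):
    """Absolute gap in true-positive rates between two groups."""
    tprs = []
    for g in np.unique(group):
        mask = (group == g) & (y_true == 1)
        tprs.append(y_pred[mask].mean())
    return abs(tprs[0] - tprs[1])

# Toy example: binary predictions from a hypothetical classifier,
# split across two demographic groups (0 and 1).
y_true = np.array([1, 0, 1, 1, 0, 1, 0, 0])
y_pred = np.array([1, 0, 1, 0, 0, 1, 1, 0])
group  = np.array([0, 0, 0, 0, 1, 1, 1, 1])

print("Demographic parity difference:",
      demographic_parity_difference(y_pred, group))
print("Equal opportunity difference:",
      equal_opportunity_difference(y_true, y_pred, group))
```

A value near zero on either metric indicates that the classifier treats the two groups similarly by that criterion; which criterion is appropriate depends on the application, which is part of why fairness-aware methods vary across tasks such as entity matching and occupation risk prediction.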