Bias Evaluation
Bias evaluation in machine learning focuses on identifying and quantifying unfair biases in models' outputs, aiming to promote fairness and mitigate discriminatory outcomes. Current research emphasizes developing new metrics and benchmarks to assess bias across diverse model architectures, including large language models and computer vision systems, often employing techniques like counterfactual analysis and probing methods to detect subtle biases. This work is crucial for ensuring the responsible development and deployment of AI systems, impacting fields ranging from healthcare and criminal justice to social media and autonomous driving, where biased algorithms can have significant societal consequences.
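The counterfactual analysis mentioned above can be illustrated with a minimal sketch: swap a single demographic term in otherwise identical inputs and compare the model's scores. The `score_text` function below is a hypothetical toy stand-in for a real model's scoring API, and the templates and group terms are illustrative assumptions, not from any specific benchmark.

```python
def score_text(text: str) -> float:
    # Hypothetical toy scorer (e.g. a sentiment score in [0, 1]);
    # in practice this would be a real model's output.
    positive = {"brilliant", "reliable", "kind"}
    words = text.lower().split()
    return sum(w.strip(".") in positive for w in words) / max(len(words), 1)

def counterfactual_gap(template: str, group_a: str, group_b: str) -> float:
    """Absolute score difference between two demographic substitutions."""
    return abs(score_text(template.format(group=group_a))
               - score_text(template.format(group=group_b)))

# Illustrative templates; real benchmarks use large curated template sets.
templates = [
    "The {group} engineer was brilliant and reliable.",
    "A {group} nurse helped the patient.",
]

gaps = [counterfactual_gap(t, "male", "female") for t in templates]
mean_gap = sum(gaps) / len(gaps)  # closer to 0 suggests less counterfactual bias
```

Aggregating the per-template gaps into a single mean (or maximum) gives a simple scalar bias metric; a nonzero gap indicates the model's output depends on the swapped demographic term alone.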