Implicit Bias
Implicit bias refers to the unintended, often subtle biases embedded in machine learning models, typically inherited from their training data. Current research focuses on detecting and mitigating these biases across model architectures, particularly large language models (LLMs) and deep neural networks, using techniques such as prompt engineering, fine-tuning, and Bayesian methods. Understanding and addressing implicit bias is crucial for ensuring fairness and equity in AI applications, with stakes in fields ranging from healthcare and criminal justice to education and hiring. The development of robust bias detection and mitigation strategies is a central goal of ongoing research.
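As a concrete illustration of what detection can look like in practice, the sketch below probes a masked language model for gendered occupation associations by comparing the probabilities it assigns to pronouns in a fill-in-the-blank template. This is a minimal, template-based probe in the spirit of common bias benchmarks; the model name, template, and occupation list are illustrative assumptions, not drawn from any specific paper.

```python
# A minimal sketch of a template-based bias probe for a masked language
# model, assuming the Hugging Face `transformers` library is installed.
# The model, template, and occupation list below are illustrative choices.
from transformers import pipeline

unmasker = pipeline("fill-mask", model="bert-base-uncased")

occupations = ["doctor", "nurse", "engineer", "teacher"]
template = "The {occupation} said that [MASK] would be late."

for occupation in occupations:
    predictions = unmasker(template.format(occupation=occupation), top_k=50)
    # Collect the model's probability for each candidate token filling the
    # mask; a large gap between "he" and "she" for a given occupation
    # suggests an implicit gender association.
    scores = {p["token_str"]: p["score"] for p in predictions}
    he, she = scores.get("he", 0.0), scores.get("she", 0.0)
    print(f"{occupation:>10}: P(he)={he:.3f}  P(she)={she:.3f}")
```

A probe like this only surfaces associations in one narrow context; systematic audits vary the templates and attribute words, and mitigation work then measures whether fine-tuning or prompting interventions shrink the observed gaps.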