Implicit Bias
Implicit bias refers to unintended, often subtle biases embedded in machine learning models, typically inherited from their training data. Current research focuses on detecting and mitigating these biases across model architectures, particularly large language models (LLMs) and deep neural networks, using techniques such as prompt engineering, fine-tuning, and Bayesian methods. Understanding and addressing implicit bias is crucial for fairness and equity in AI applications, with stakes in fields ranging from healthcare and criminal justice to education and hiring; developing robust detection and mitigation strategies remains a central goal of ongoing research.
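A common first step in bias detection is prompt-based probing: query the model with a fixed template and compare the probabilities it assigns to different demographic terms. The sketch below is a minimal, illustrative example of this idea using the Hugging Face fill-mask pipeline; the model checkpoint, template, and candidate terms are assumptions chosen for demonstration, not a method from any particular paper.

```python
# Minimal sketch of prompt-based bias probing. Illustrative assumptions:
# the checkpoint, template, and candidate terms are demo choices only.
from transformers import pipeline

# Any fill-mask-capable masked language model works here.
probe = pipeline("fill-mask", model="bert-base-uncased")

# Template with an occupational context; [MASK] is BERT's mask token.
template = "The [MASK] was hired as a nurse."
candidates = ["man", "woman"]  # demographic terms to compare

# `targets` restricts scoring to the candidate tokens, so the
# probabilities are directly comparable across terms.
results = probe(template, targets=candidates)
for r in results:
    print(f"{r['token_str']:>6}: p = {r['score']:.4f}")

# A large gap between the two probabilities is a crude signal of a
# stereotyped association; real audits average over many templates
# and many term pairs before drawing conclusions.
```

In practice, probes like this are aggregated over large template sets, and the resulting association scores feed into mitigation steps such as fine-tuning on counterfactually augmented data.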