Implicit Bias
Implicit bias refers to the unintended, often subtle biases that machine learning models absorb from their training data. Current research focuses on detecting and mitigating these biases across model architectures, particularly large language models (LLMs) and deep neural networks, using techniques such as prompt engineering, fine-tuning, and Bayesian methods. Understanding and addressing implicit bias is crucial for ensuring fairness and equity in AI applications, with stakes in fields ranging from healthcare and criminal justice to education and hiring. The development of robust bias detection and mitigation strategies is a central goal of ongoing research.
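Much of the detection work mentioned above reduces to probing a model's output distribution for systematic asymmetries across demographic groups. Below is a minimal sketch of one such probe, assuming a HuggingFace-style fill-mask pipeline; the model name, occupation list, and sentence template are illustrative choices, not drawn from any particular paper.

```python
# Minimal sketch of an implicit-bias probe for a masked language model.
# It compares the model's fill-in probabilities for gendered pronouns
# across occupation templates, a simplified way to surface associations
# the model may have absorbed from its training data.
from transformers import pipeline

# bert-base-uncased is an arbitrary example model for this sketch.
fill = pipeline("fill-mask", model="bert-base-uncased")

occupations = ["doctor", "nurse", "engineer", "teacher"]  # hypothetical probe set
for job in occupations:
    # Restrict the mask predictions to the two pronouns of interest
    # and read off their normalized scores.
    preds = fill(f"The {job} said that [MASK] would be late.",
                 targets=["he", "she"])
    scores = {p["token_str"]: p["score"] for p in preds}
    print(f"{job:10s}  he={scores.get('he', 0.0):.4f}  "
          f"she={scores.get('she', 0.0):.4f}")
```

A large, consistent gap between the pronoun scores across occupations is one simple signal of gendered associations in the model; published probes such as WEAT-style association tests formalize this idea with controlled word sets and statistical significance testing.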