Debiasing Methods
Debiasing methods aim to mitigate biases that machine learning models, particularly large language models (LLMs), learn from skewed training data, thereby improving fairness and generalizability. Current research focuses on techniques such as data augmentation, weight masking, and prompt engineering, applied to BERT and other transformer-based architectures, alongside causal-inference and machine-unlearning approaches. Effective debiasing is crucial for fair AI applications across domains ranging from healthcare and criminal justice to natural language processing, and it remains a significant area of ongoing investigation within the broader field of AI ethics.
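Of the techniques mentioned above, data augmentation is the simplest to illustrate. A common variant is counterfactual data augmentation (CDA): each training sentence is duplicated with demographic terms swapped, so the model sees both variants equally often. The sketch below is a minimal, illustrative version; the word-pair list and function names are assumptions for demonstration, not a standard vocabulary, and a real pipeline would also handle punctuation, casing, and multi-word terms.

```python
# Minimal sketch of counterfactual data augmentation (CDA) for debiasing.
# The swap pairs below are illustrative only, not an authoritative list.
SWAP = {
    "he": "she", "she": "he",
    "his": "her", "her": "his",
    "man": "woman", "woman": "man",
}

def counterfactual(sentence: str) -> str:
    """Return the sentence with each demographic term replaced by its
    counterpart. Lowercases input; ignores punctuation for simplicity."""
    tokens = sentence.lower().split()
    return " ".join(SWAP.get(t, t) for t in tokens)

def augment(corpus: list[str]) -> list[str]:
    """Train on the union of original sentences and their counterfactuals,
    balancing the demographic distribution seen by the model."""
    return corpus + [counterfactual(s) for s in corpus]
```

For example, `counterfactual("he wrote his paper")` yields `"she wrote her paper"`, and `augment` doubles the corpus so that both variants appear with equal frequency during training.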