Debiasing Methods
Debiasing methods aim to mitigate biases that machine learning models, particularly large language models (LLMs), learn from skewed training data, improving fairness and generalizability. Current research focuses on techniques such as data augmentation, weight masking, and prompt engineering, applied to BERT and other transformer-based architectures, alongside causal-inference and unlearning approaches. Effective debiasing is crucial for fairness in AI applications across diverse domains, from healthcare and criminal justice to natural language processing tasks, and remains an active area of investigation within the broader field of AI ethics.
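To make the data-augmentation family of techniques concrete, the sketch below illustrates counterfactual data augmentation (CDA), a common approach in which gendered terms in training sentences are swapped so the model sees both variants. This is a minimal, hypothetical example, not the method of any specific paper; the swap lexicon and helper names are assumptions, and real systems use far larger lexicons and handle grammatical ambiguity (e.g. "her" as pronoun vs. possessive) more carefully.

```python
# Minimal sketch of counterfactual data augmentation (CDA) for debiasing.
# The lexicon and function names are illustrative, not from any one paper.

# Pairs of gendered terms to swap; real systems use much larger lexicons.
# Note: "her" is ambiguous (pronoun vs. possessive); this toy version ignores that.
SWAP_PAIRS = {
    "he": "she", "she": "he",
    "him": "her", "her": "him",
    "man": "woman", "woman": "man",
}

def counterfactual(sentence: str) -> str:
    """Return a copy of the sentence with gendered terms swapped."""
    swapped = []
    for tok in sentence.split():
        core = tok.strip(".,!?").lower()
        if core in SWAP_PAIRS:
            repl = SWAP_PAIRS[core]
            # Preserve simple capitalization and trailing punctuation.
            if tok[0].isupper():
                repl = repl.capitalize()
            trailing = tok[len(tok.rstrip(".,!?")):]
            swapped.append(repl + trailing)
        else:
            swapped.append(tok)
    return " ".join(swapped)

def augment(corpus: list[str]) -> list[str]:
    """Train on originals plus counterfactuals to balance exposure."""
    return corpus + [counterfactual(s) for s in corpus]

corpus = ["He is a doctor.", "She stayed home."]
print(augment(corpus))
# → ['He is a doctor.', 'She stayed home.', 'She is a doctor.', 'He stayed home.']
```

Training on the balanced corpus reduces the model's exposure to spurious correlations between gendered terms and, for example, occupations, which is the core intuition behind this family of debiasing methods.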