Debiasing Methods
Debiasing methods aim to mitigate biases that machine learning models, particularly large language models (LLMs), learn from skewed training data, improving fairness and generalizability. Current research focuses on techniques such as data augmentation, weight masking, and prompt engineering, applied to BERT and other transformer-based architectures, as well as on causal-inference and unlearning approaches. Effective debiasing is crucial for ensuring fairness in AI applications across diverse domains, from healthcare and criminal justice to natural language processing tasks, and remains a significant area of ongoing investigation within the broader field of AI ethics.
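One of the data-augmentation techniques mentioned above, counterfactual data augmentation, can be sketched in a few lines: each training sentence is duplicated with demographic terms swapped for their counterparts, so the model sees both variants equally often. The word-pair list and function names below are illustrative assumptions, not taken from any specific paper, and real implementations use much larger curated lexicons.

```python
import re

# Illustrative gendered word pairs; real CDA pipelines use curated lists.
WORD_PAIRS = [("he", "she"), ("him", "her"), ("his", "hers"),
              ("man", "woman"), ("men", "women")]

# Build a bidirectional term mapping from the pairs.
MAPPING = {}
for a, b in WORD_PAIRS:
    MAPPING[a] = b
    MAPPING[b] = a

# Match whole words only, case-insensitively.
PATTERN = re.compile(r"\b(" + "|".join(MAPPING) + r")\b", re.IGNORECASE)

def counterfactual(sentence: str) -> str:
    """Swap each gendered term for its counterpart, preserving initial case."""
    def repl(match):
        word = match.group(0)
        swapped = MAPPING[word.lower()]
        return swapped.capitalize() if word[0].isupper() else swapped
    return PATTERN.sub(repl, sentence)

def augment(corpus):
    """Return the corpus plus one counterfactual copy of each sentence."""
    return corpus + [counterfactual(s) for s in corpus]
```

For example, `augment(["He runs."])` yields `["He runs.", "She runs."]`. This naive word-level swap ignores grammatical agreement (e.g. "his" vs. "her" as a possessive), which is one reason production debiasing pipelines rely on hand-checked substitution lists rather than simple pattern replacement.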