Self-Debiasing
Self-debiasing aims to mitigate biases (unfair prejudices learned from training data) in machine learning models, particularly large language models (LLMs) and other deep learning architectures, by having the model itself detect and suppress biased behaviour in its own outputs rather than relying solely on external filters or retraining. Current research focuses on identifying and reducing biases across modalities (text, images, graphs), employing techniques such as data augmentation, prompt engineering, and adversarial training within frameworks such as diffusion models and graph neural networks. Addressing these biases is crucial for the fairness, reliability, and ethical use of AI systems across diverse applications, ranging from healthcare and recruitment to social media and criminal justice.
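To make the prompt-engineering flavour of self-debiasing concrete, below is a minimal sketch of decoding-time self-debiasing, loosely in the spirit of approaches where the model's own next-token distribution under a "biased" self-diagnosis prompt is used to penalise undesirable continuations. It assumes the Hugging Face transformers library with GPT-2; the debiasing prefix, the decay factor alpha, and the generation length are illustrative choices, not values taken from any specific paper.

```python
# Sketch of prompt-based self-debiasing at decoding time.
# Assumes `transformers` and GPT-2; prompt text and hyperparameters
# are hypothetical, for illustration only.
import torch
from transformers import GPT2LMHeadModel, GPT2Tokenizer

tokenizer = GPT2Tokenizer.from_pretrained("gpt2")
model = GPT2LMHeadModel.from_pretrained("gpt2")
model.eval()

# Hypothetical self-diagnosis prefix that encourages biased continuations;
# tokens it boosts are the ones we want to suppress in normal decoding.
DEBIAS_PREFIX = "The following text is rude, disrespectful, or biased: "

def self_debiased_step(context: str, alpha: float = 10.0) -> torch.Tensor:
    """Next-token log-probabilities for `context`, penalising tokens whose
    probability rises when the biased prefix is prepended."""
    plain = tokenizer(context, return_tensors="pt")
    biased = tokenizer(DEBIAS_PREFIX + context, return_tensors="pt")
    with torch.no_grad():
        logp_plain = torch.log_softmax(model(**plain).logits[0, -1], dim=-1)
        logp_biased = torch.log_softmax(model(**biased).logits[0, -1], dim=-1)
    # Positive delta = token becomes more likely under the biased prompt;
    # alpha controls how aggressively such tokens are pushed down.
    delta = torch.clamp(logp_biased - logp_plain, min=0.0)
    return torch.log_softmax(logp_plain - alpha * delta, dim=-1)

def generate(context: str, steps: int = 20) -> str:
    # Greedy decoding using the self-debiased distribution at each step.
    for _ in range(steps):
        next_id = self_debiased_step(context).argmax().item()
        context += tokenizer.decode([next_id])
    return context

if __name__ == "__main__":
    print(generate("The new employee from the small town was"))
```

The key design choice is that no external bias classifier or retraining is involved: the same model scores each candidate token twice, once normally and once under the self-diagnosis prefix, and the gap between the two distributions serves as the bias signal.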