Self-Debiasing
Self-debiasing aims to mitigate biases—unfair prejudices learned from training data—in machine learning models, particularly large language models (LLMs) and other deep learning architectures. Current research focuses on developing methods to identify and reduce biases across various modalities (text, images, graphs), employing techniques like data augmentation, prompt engineering, and adversarial training within frameworks such as diffusion models and graph neural networks. Successfully addressing these biases is crucial for ensuring fairness, reliability, and ethical use of AI systems across diverse applications, ranging from healthcare and recruitment to social media and criminal justice.
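To make the idea of prompt-based self-debiasing concrete, the toy sketch below shows one common pattern: compare the model's next-token distribution under a normal prompt with the distribution obtained after a bias-encouraging prefix, then suppress tokens that the biased prefix amplifies. This is a minimal, illustrative sketch only; the function name `self_debias`, the four-token vocabulary, and all probability values are hypothetical placeholders rather than any specific paper's or library's implementation.

```python
import numpy as np

def self_debias(p_default: np.ndarray, p_biased: np.ndarray,
                decay: float = 50.0) -> np.ndarray:
    """Rescale the default next-token distribution, down-weighting tokens whose
    probability rises when the prompt is prefixed with bias-encouraging text."""
    # Positive delta means the bias-encouraging prefix makes the token MORE likely.
    delta = p_biased - p_default
    # Exponential penalty for tokens the biased prefix amplifies; tokens it does
    # not amplify keep (approximately) their original probability mass.
    scale = np.exp(-decay * np.clip(delta, 0.0, None))
    p = p_default * scale
    return p / p.sum()  # renormalize to a valid probability distribution

# Toy example with a 4-token vocabulary (purely illustrative numbers).
vocab = ["kind", "smart", "toxic_a", "toxic_b"]
p_default = np.array([0.40, 0.35, 0.15, 0.10])  # distribution under the normal prompt
p_biased  = np.array([0.20, 0.15, 0.40, 0.25])  # distribution after a bias-encouraging prefix

p_debiased = self_debias(p_default, p_biased)
for tok, prob in zip(vocab, p_debiased):
    print(f"{tok:8s} {prob:.3f}")  # tokens amplified by the biased prefix are strongly suppressed
```

In practice the two distributions would come from the same language model queried with and without the bias-encouraging prefix, and the penalized distribution would be used for sampling the next token; the decay constant trades off bias suppression against fluency.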