Self-Debiasing
Self-debiasing refers to techniques for mitigating biases (unfair prejudices learned from training data) in machine learning models, particularly large language models (LLMs) and other deep learning architectures, often by using the model's own knowledge to recognize and suppress its biased outputs. Current research focuses on identifying and reducing biases across modalities (text, images, graphs), using techniques such as data augmentation, prompt engineering, and adversarial training within frameworks such as diffusion models and graph neural networks. Addressing these biases is crucial for the fairness, reliability, and ethical use of AI systems in applications ranging from healthcare and recruitment to social media and criminal justice.
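To make the prompt-based flavour of self-debiasing concrete, the sketch below compares a causal LM's next-token distribution with and without a bias-encouraging prefix and down-weights tokens whose probability rises under that prefix. This is only a minimal illustration of the general idea; the model name (`gpt2`), the `BIAS_PREFIX` text, the scaling constant `alpha`, and the exponential penalty are illustrative assumptions, not the procedure of any specific paper.

```python
# Minimal sketch of prompt-based self-debiasing decoding (assumptions noted above).
import torch
from transformers import AutoModelForCausalLM, AutoTokenizer

MODEL_NAME = "gpt2"  # assumption: any small causal LM works for the sketch
tokenizer = AutoTokenizer.from_pretrained(MODEL_NAME)
model = AutoModelForCausalLM.from_pretrained(MODEL_NAME)
model.eval()

# Hypothetical prefix that invites the undesired behaviour; its token
# probabilities are used only as a signal for which continuations to suppress.
BIAS_PREFIX = "The following text contains offensive stereotypes:\n"


@torch.no_grad()
def self_debiased_next_token_probs(prompt: str, alpha: float = 50.0) -> torch.Tensor:
    """Return next-token probabilities with bias-associated tokens down-weighted."""

    def next_probs(text: str) -> torch.Tensor:
        ids = tokenizer(text, return_tensors="pt").input_ids
        logits = model(ids).logits[0, -1]          # logits for the next token
        return torch.softmax(logits, dim=-1)

    p_plain = next_probs(prompt)                    # ordinary distribution
    p_biased = next_probs(BIAS_PREFIX + prompt)     # distribution under the bias prefix

    # Penalise tokens whose probability increases when the bias prefix is added.
    delta = torch.clamp(p_biased - p_plain, min=0.0)
    scaled = p_plain * torch.exp(-alpha * delta)
    return scaled / scaled.sum()                    # renormalise to a distribution


if __name__ == "__main__":
    probs = self_debiased_next_token_probs("The nurse said that")
    top = torch.topk(probs, k=5)
    for p, idx in zip(top.values, top.indices):
        print(f"{tokenizer.decode(idx.item()):>12s}  {p.item():.4f}")
```

The same comparison can be applied at every decoding step inside a sampling loop; larger `alpha` suppresses bias-associated tokens more aggressively at the cost of fluency.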