Self-Debiasing
Self-debiasing aims to mitigate biases—unfair prejudices learned from training data—in machine learning models, particularly large language models (LLMs) and other deep learning architectures, typically by leveraging a model's own internal knowledge to recognize and suppress its biased outputs. Current research focuses on methods to identify and reduce biases across modalities (text, images, graphs), employing techniques such as data augmentation, prompt engineering, and adversarial training within frameworks like diffusion models and graph neural networks. Successfully addressing these biases is crucial for the fairness, reliability, and ethical use of AI systems across diverse applications, from healthcare and recruitment to social media and criminal justice.
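To make the prompt-based flavor of self-debiasing concrete, the sketch below implements the core idea behind decoding-time self-debiasing (in the spirit of Schick et al., 2021): prepend a bias-eliciting prefix to the prompt, and suppress any next token whose probability *rises* under that prefix. Everything here is a toy illustration under stated assumptions—`next_token_probs`, the prefix text, the vocabulary, and the decay parameter `lam` are hypothetical stand-ins, not the API of any real library.

```python
import math

# Toy vocabulary and hand-crafted next-token distributions standing in for a
# real language model. A token named "slur" represents an undesirable output.
VOCAB = ["kind", "smart", "slur", "busy"]

def next_token_probs(prompt):
    # Pretend LM (hypothetical): the bias-eliciting prefix raises the
    # probability of the undesirable token, which is the signal we exploit.
    if prompt.startswith("The following text is offensive:"):
        return {"kind": 0.10, "smart": 0.10, "slur": 0.60, "busy": 0.20}
    return {"kind": 0.30, "smart": 0.30, "slur": 0.20, "busy": 0.20}

def self_debias(prompt, lam=10.0):
    """Down-weight tokens that become more likely under the bias-eliciting
    prefix (decay factor exp(-lam * delta) for positive delta), renormalize."""
    p = next_token_probs(prompt)
    p_biased = next_token_probs("The following text is offensive: " + prompt)
    scaled = {}
    for tok in VOCAB:
        delta = math.log(p_biased[tok]) - math.log(p[tok])
        scale = math.exp(-lam * delta) if delta > 0 else 1.0
        scaled[tok] = p[tok] * scale
    z = sum(scaled.values())
    return {tok: v / z for tok, v in scaled.items()}

probs = self_debias("She is very")
print(probs)
```

The key design point is that no external classifier or retraining is needed: the model's own response to the bias-eliciting prefix supplies the debiasing signal, which is what makes the approach "self"-debiasing.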