Debiasing Framework
Debiasing frameworks aim to mitigate bias in machine learning models, particularly large language models (LLMs) and vision-language models, which can perpetuate societal inequities by reproducing biases present in their training data. Current research focuses on techniques such as multi-LLM approaches, prompt engineering, and adversarial learning that neutralize biases tied to protected attributes (gender, race, age) without sacrificing task performance. These advances are crucial for fairness and ethics in AI applications across fields from healthcare and finance to the social sciences, where biased models can produce discriminatory outcomes.
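To make the adversarial-learning idea concrete, here is a minimal sketch (not drawn from any specific paper) using only NumPy and synthetic data: a logistic predictor is trained on a task while a scalar adversary tries to recover a binary protected attribute `a` from the predictor's score; the predictor is simultaneously pushed to defeat the adversary. All variable names and the data-generating setup are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic data: feature 0 leaks a binary protected attribute `a`,
# and the label depends on both features, so a naive model absorbs the leak.
n = 4000
a = rng.integers(0, 2, n)                        # protected attribute
x = rng.normal(0.0, 1.0, (n, 2))
x[:, 0] += 0.75 * (2 * a - 1)                    # leak `a` into feature 0
y = (x[:, 0] + x[:, 1] + 0.3 * rng.normal(0.0, 1.0, n) > 0).astype(float)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def train(lam, steps=2000, lr=0.05):
    """Logistic predictor; an adversary tries to recover `a` from its score.

    lam = 0 is plain training; lam > 0 penalizes the predictor whenever
    the adversary can decode the protected attribute from the score.
    """
    w = np.zeros(2)   # predictor weights
    u = 0.0           # adversary weight on the predictor's score
    for _ in range(steps):
        s = x @ w
        p = sigmoid(s)
        q = sigmoid(u * s)                       # adversary's guess of `a`
        g_task = x.T @ (p - y) / n               # task cross-entropy gradient
        g_adv_u = np.mean((q - a) * s)           # adversary's own gradient
        g_adv_w = x.T @ ((q - a) * u) / n        # adversary loss gradient w.r.t. w
        u -= lr * g_adv_u                        # adversary minimizes its loss
        w -= lr * (g_task - lam * g_adv_w)       # predictor also maximizes it
    s = x @ w
    acc = np.mean((s > 0) == (y > 0.5))          # task accuracy
    leak = abs(np.corrcoef(s, a)[0, 1])          # how much of `a` is in the score
    return acc, leak

acc_plain, leak_plain = train(lam=0.0)
acc_fair, leak_fair = train(lam=2.0)
print(f"plain:    acc={acc_plain:.2f} leak={leak_plain:.2f}")
print(f"debiased: acc={acc_fair:.2f} leak={leak_fair:.2f}")
```

With the debiasing term enabled, the score's correlation with the protected attribute drops while accuracy degrades only modestly, illustrating the accuracy-fairness trade-off these frameworks try to manage; production systems typically use neural predictors and adversaries with gradient-reversal layers rather than this two-parameter toy.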