Debiasing Framework
Debiasing frameworks aim to mitigate bias in machine learning models, particularly large language models (LLMs) and vision-language models, which can perpetuate societal inequalities by reproducing biases present in their training data. Current research focuses on techniques such as multi-LLM approaches, prompt engineering, and adversarial learning that neutralize biases tied to protected attributes (gender, race, age) without sacrificing task performance. These advances are crucial for fairness and ethical deployment of AI in fields ranging from healthcare and finance to the social sciences, where biased models can produce discriminatory outcomes.
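The adversarial-learning idea mentioned above can be sketched in a few lines: a predictor learns its task while an adversary tries to recover the protected attribute from the predictor's internal score, and the predictor is penalized for whatever the adversary recovers. The minimal numpy example below is an illustrative toy, not any specific paper's method; the synthetic data, the single-layer models, and the adversarial weight `lam` are all assumptions made for the sketch.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

rng = np.random.default_rng(0)

# Synthetic data where a protected attribute `a` leaks into the features.
n, d = 500, 5
a = rng.integers(0, 2, n).astype(float)        # protected attribute (illustrative)
X = rng.normal(size=(n, d))
X[:, 0] += 2.0 * a                             # feature 0 encodes the attribute
y = (X[:, 1] + 0.5 * a + rng.normal(scale=0.5, size=n) > 0).astype(float)

w = np.zeros(d)            # predictor: logistic regression on X
v1, v0 = 0.0, 0.0          # adversary: logistic regression on the predictor's score
lr, lam = 0.1, 1.0         # learning rate and adversarial strength (assumed values)

for _ in range(300):
    s = X @ w                                  # predictor score (shared "representation")
    p_y = sigmoid(s)                           # task prediction
    p_a = sigmoid(v1 * s + v0)                 # adversary's guess of the attribute

    # Task gradient: standard logistic-regression gradient for predicting y.
    g_task = X.T @ (p_y - y) / n
    # Adversary's gradient w.r.t. w, flowing through s via the chain rule.
    g_adv = X.T @ ((p_a - a) * v1) / n

    # Gradient reversal: descend the task loss while ascending the adversary's
    # loss, pushing the score s to be less predictive of the attribute.
    w -= lr * (g_task - lam * g_adv)

    # The adversary descends its own loss to stay competitive.
    v1 -= lr * np.mean((p_a - a) * s)
    v0 -= lr * np.mean(p_a - a)
```

In practice the same reversal trick is applied to a neural network's hidden representation rather than a scalar score, and the adversarial weight trades task accuracy against attribute leakage.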