Quantifying Bias
Quantifying bias in artificial intelligence models, particularly large language models and text-to-image generators, is an active research area that aims to identify and measure unfairness in model outputs and training data. Current efforts focus on developing both intrinsic metrics, which probe a model's internal representations such as word embeddings, and extrinsic metrics, which evaluate behavior on downstream tasks; common techniques include analyzing embedding geometry, using large language models themselves as bias detectors, and building controlled datasets for benchmarking. These advances are vital for improving the fairness and reliability of AI systems in applications ranging from legal decision-making to healthcare and marketing, and ultimately for responsible AI development.
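To make the intrinsic case concrete, the sketch below computes a WEAT-style effect size (the Word Embedding Association Test of Caliskan et al., 2017), one of the standard embedding-based bias metrics. It is a minimal illustration, not a reference implementation: the word lists are illustrative, and random vectors stand in for embeddings that would normally come from a trained model.

```python
import numpy as np

def cosine(u, v):
    return u @ v / (np.linalg.norm(u) * np.linalg.norm(v))

def association(w, A, B, emb):
    # s(w, A, B): mean cosine similarity of w to attribute set A minus set B.
    return (np.mean([cosine(emb[w], emb[a]) for a in A])
            - np.mean([cosine(emb[w], emb[b]) for b in B]))

def weat_effect_size(X, Y, A, B, emb):
    # Effect size d: difference in mean association between target sets X and Y,
    # normalized by the sample standard deviation over all targets in X and Y.
    s = {w: association(w, A, B, emb) for w in X + Y}
    return ((np.mean([s[x] for x in X]) - np.mean([s[y] for y in Y]))
            / np.std(list(s.values()), ddof=1))

# Illustrative word lists; random 50-d vectors are a stand-in for real embeddings.
rng = np.random.default_rng(0)
words = ["doctor", "engineer", "nurse", "teacher", "he", "him", "she", "her"]
emb = {w: rng.standard_normal(50) for w in words}

d = weat_effect_size(X=["doctor", "engineer"], Y=["nurse", "teacher"],
                     A=["he", "him"], B=["she", "her"], emb=emb)
print(f"WEAT effect size: {d:+.3f}")  # near 0 for random vectors; large |d| signals bias
```

With embeddings from an actual model, a large positive effect size would indicate that the stereotypically male-coded targets associate more strongly with male attribute words than the female-coded targets do; extrinsic metrics instead measure such gaps in downstream predictions.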