Bias Measurement
Bias measurement in artificial intelligence focuses on quantifying and mitigating unfair biases in models, particularly large language models (LLMs) and multimodal models such as text-to-image generators. Current research emphasizes developing robust, reliable bias metrics, often employing techniques such as word embedding association tests and counterfactual analysis, while also examining how factors such as context length and prompt template design affect measurement accuracy. This work is crucial for ensuring fairness and trustworthiness in AI systems, shaping both the responsible development of AI technologies and the prevention of discriminatory outcomes across applications.
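As a concrete illustration of the word embedding association test (WEAT) mentioned above, the sketch below computes the standard effect size: the differential association of two target word sets (X, Y) with two attribute sets (A, B), measured by cosine similarity and normalized by the pooled standard deviation. The toy two-dimensional vectors are purely illustrative stand-ins for real word embeddings; this is a minimal sketch of the metric's arithmetic, not a full implementation with permutation testing.

```python
import numpy as np

def cosine(u, v):
    """Cosine similarity between two embedding vectors."""
    return float(np.dot(u, v) / (np.linalg.norm(u) * np.linalg.norm(v)))

def association(w, A, B):
    """s(w, A, B): mean similarity of w to attribute set A minus to set B."""
    return np.mean([cosine(w, a) for a in A]) - np.mean([cosine(w, b) for b in B])

def weat_effect_size(X, Y, A, B):
    """Cohen's-d-style WEAT effect size; bounded in [-2, 2]."""
    sx = [association(x, A, B) for x in X]
    sy = [association(y, A, B) for y in Y]
    pooled_std = np.std(sx + sy, ddof=1)
    return (np.mean(sx) - np.mean(sy)) / pooled_std

# Toy example: X-words lean toward attribute A, Y-words toward attribute B,
# so the effect size should come out positive (a measurable "bias").
A = [np.array([1.0, 0.0])]
B = [np.array([0.0, 1.0])]
X = [np.array([0.9, 0.1]), np.array([0.8, 0.2])]
Y = [np.array([0.1, 0.9]), np.array([0.2, 0.8])]

d = weat_effect_size(X, Y, A, B)
print(f"WEAT effect size: {d:.3f}")
```

On real embeddings the same computation is applied to, e.g., career vs. family attribute words against male vs. female target names, with significance usually assessed via a permutation test over the target sets.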