Artificial Intelligence Bias
Artificial intelligence (AI) bias refers to systematic, repeatable errors in AI systems that produce unfair or discriminatory outcomes, often reflecting biases present in the training data. Current research focuses on identifying and mitigating these biases across AI applications such as healthcare, finance, and criminal justice, using techniques like fairness-aware algorithms and bias-detection methods. Addressing AI bias is crucial for building equitable, trustworthy AI systems: it shapes both responsible AI development practices and the fairness of AI-driven decisions across many societal contexts.
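To make the idea of a bias-detection method concrete, here is a minimal sketch (not drawn from any specific paper above) of one widely used fairness metric, the demographic parity difference: the gap in positive-prediction rates between two demographic groups. The function name and toy data are illustrative assumptions.

```python
# Hypothetical sketch: one simple bias-detection metric,
# the demographic parity difference, computed on toy binary predictions.
def demographic_parity_difference(predictions, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, gg in zip(predictions, groups) if gg == g]
        rates[g] = sum(member_preds) / len(member_preds)
    a, b = rates.values()
    return abs(a - b)

# Toy data: model predictions for members of two groups, "A" and "B".
preds  = [1, 1, 0, 1, 0, 0, 1, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]
print(demographic_parity_difference(preds, groups))  # rate A = 0.75, rate B = 0.25 -> 0.5
```

A value near 0 suggests the model flags both groups at similar rates; larger values indicate a disparity worth investigating, though demographic parity is only one of several competing fairness criteria.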