Auditing Framework
Auditing frameworks are being developed to systematically evaluate the fairness, accuracy, and trustworthiness of increasingly prevalent AI systems, particularly in high-stakes applications such as hiring and conversational agents. Current research focuses on establishing standardized metrics and methodologies for assessing these systems, including bias detection, verification of model explanations, and the effect of different levels of access to the underlying model (e.g., black-box query access versus white-box access to internals). These frameworks aim to improve accountability and transparency, fostering trust and mitigating potential harms by providing a structured approach to identifying and addressing issues both before deployment and during operation.
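To make the black-box setting concrete, here is a minimal sketch of one common bias-detection check: measuring the gap in selection rates between demographic groups (demographic parity difference) using only query access to a model. The `audited_model` function, the `group` attribute, and the applicant data are all hypothetical stand-ins, not from any specific framework.

```python
import random

def audited_model(applicant):
    # Hypothetical black-box hiring model: the auditor only observes
    # inputs and outputs, never the model's internals.
    return applicant["experience_years"] >= 5

def demographic_parity_difference(model, applicants, group_key):
    """Gap between the highest and lowest group selection rates.

    0.0 means all groups are selected at the same rate (parity);
    larger values indicate a bigger disparity.
    """
    counts = {}  # group -> (selected, total)
    for a in applicants:
        g = a[group_key]
        sel, tot = counts.get(g, (0, 0))
        counts[g] = (sel + int(model(a)), tot + 1)
    rates = [sel / tot for sel, tot in counts.values()]
    return max(rates) - min(rates)

# Probe the black box with synthetic applicants and report the gap.
random.seed(0)
applicants = [
    {"group": random.choice("AB"), "experience_years": random.randint(0, 10)}
    for _ in range(1000)
]
gap = demographic_parity_difference(audited_model, applicants, "group")
print(f"demographic parity gap: {gap:.3f}")
```

A white-box audit could go further, e.g., inspecting which features drive the decision, but the query-only pattern above is what an external auditor without access to the model's internals would typically be limited to.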
Papers
October 7, 2024
September 3, 2024
July 18, 2024
April 25, 2023
April 21, 2023
May 9, 2022
February 15, 2022
January 23, 2022