Artificial Intelligence Audit

Artificial intelligence (AI) auditing is the systematic evaluation of AI systems, their development processes, and their impacts against predefined criteria, with the aim of ensuring safety, fairness, and compliance. Current research emphasizes standardized auditing procedures that combine technical assessments of model properties with process-oriented evaluations of governance structures, and focuses on determining what level of access to an AI system (from black-box query access to white-box inspection of model internals, and beyond) auditors need to be effective. The field is crucial for building trust in AI, informing policy decisions, and mitigating the societal risks of increasingly prevalent AI systems across sectors.
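To make the access-level distinction concrete, the sketch below shows what a black-box technical assessment can look like: the auditor measures a fairness property (here, the demographic parity gap) using only the model's prediction interface, never its weights. All names here, including `audit_demographic_parity` and the stub model, are illustrative assumptions, not a reference to any specific auditing tool.

```python
# Minimal sketch of a black-box fairness audit: the auditor can only query
# the model for predictions, not inspect its parameters. The function and
# stub model are hypothetical, for illustration only.

def audit_demographic_parity(predict, inputs, groups):
    """Return the absolute difference in positive-prediction rates
    between two groups, computed purely via black-box queries."""
    rates = {}
    for g in set(groups):
        idx = [i for i, grp in enumerate(groups) if grp == g]
        preds = [predict(inputs[i]) for i in idx]  # query access only
        rates[g] = sum(preds) / len(preds)
    a, b = rates.values()
    return abs(a - b)

# Stub standing in for an opaque AI system under audit.
def stub_model(x):
    return 1 if x >= 0.5 else 0

inputs = [0.1, 0.6, 0.7, 0.2, 0.9, 0.4]
groups = ["A", "A", "A", "B", "B", "B"]

gap = audit_demographic_parity(stub_model, inputs, groups)
print(f"demographic parity gap: {gap:.2f}")
```

A white-box audit of the same property could additionally inspect which features drive the disparity; the black-box version above only establishes that a gap exists, which is why the level of access granted to auditors shapes what an audit can conclude.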

Papers