Audit Evidence

AI auditing systematically evaluates AI systems for bias, fairness, robustness, and compliance with legal and ethical standards, with the goal of supporting responsible AI development and deployment. Current research develops auditing methodologies for a range of model types, including generative models and large language models, often using techniques such as contrastive learning, hypothesis testing, and formal concept analysis to assess model behavior and identify vulnerabilities. The field underpins trust and accountability in AI systems, informing both the scientific understanding of AI limitations and the practical implementation of ethical AI guidelines across diverse sectors.
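
As a minimal sketch of the hypothesis-testing approach mentioned above, the example below audits a binary classifier for demographic parity using a two-proportion z-test on positive-prediction rates across two groups. The function name, the synthetic data, and the 0.05 significance threshold are illustrative assumptions, not taken from any specific paper.

```python
# Minimal sketch: auditing a classifier for demographic parity via a
# two-proportion z-test. All names and data here are hypothetical.
import numpy as np
from scipy import stats

def audit_demographic_parity(y_pred, group, alpha=0.05):
    """Test whether positive-prediction rates differ between two groups.

    y_pred : array of 0/1 model predictions
    group  : array of 0/1 protected-attribute membership
    Returns the rate gap, the p-value, and whether parity is rejected.
    """
    y_pred = np.asarray(y_pred)
    group = np.asarray(group)

    rates = [y_pred[group == g].mean() for g in (0, 1)]
    counts = [int(y_pred[group == g].sum()) for g in (0, 1)]
    ns = [int((group == g).sum()) for g in (0, 1)]

    # Pooled positive rate under H0: both groups share the same rate.
    pooled = sum(counts) / sum(ns)
    se = np.sqrt(pooled * (1 - pooled) * (1 / ns[0] + 1 / ns[1]))
    z = (rates[0] - rates[1]) / se
    p_value = 2 * stats.norm.sf(abs(z))  # two-sided test

    return {
        "rate_gap": rates[0] - rates[1],
        "p_value": p_value,
        "reject_parity": p_value < alpha,
    }

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    group = rng.integers(0, 2, size=2000)
    # Synthetic predictions with a deliberate rate gap between the groups.
    y_pred = rng.binomial(1, np.where(group == 0, 0.55, 0.47))
    print(audit_demographic_parity(y_pred, group))
```

In a real audit the predictions would come from the system under review rather than synthetic data, and the choice of test statistic and significance level would depend on the fairness criterion and sample sizes at hand.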

Papers