Audit Evidence
AI auditing systematically evaluates AI systems for bias, fairness, robustness, and compliance with legal and ethical standards, with the goal of supporting responsible AI development and deployment. Current research focuses on auditing methodologies for a range of model types, including generative models and large language models, often employing techniques such as contrastive learning, hypothesis testing, and formal concept analysis to characterize model behavior and surface vulnerabilities. The field underpins trust and accountability in AI systems, shaping both the scientific understanding of AI limitations and the practical implementation of ethical AI guidelines across diverse sectors.
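As a concrete illustration of the hypothesis-testing approach mentioned above, the sketch below audits a classifier for demographic parity with a two-proportion z-test: it asks whether the positive-decision rate differs significantly between two groups. This is a minimal sketch under common fairness-auditing assumptions, not a method from any particular paper; the function name, variable names, and synthetic data are all hypothetical.

```python
import numpy as np
from scipy.stats import norm

def audit_demographic_parity(preds_a, preds_b, alpha=0.05):
    """Two-proportion z-test: do positive-prediction rates differ across groups?

    preds_a, preds_b: arrays of 0/1 model decisions for two demographic groups.
    Hypothetical helper for illustration; not from a specific auditing framework.
    """
    n_a, n_b = len(preds_a), len(preds_b)
    rate_a, rate_b = np.mean(preds_a), np.mean(preds_b)
    # Pooled positive rate under H0 (equal rates) and standard error of the gap
    p_pool = (np.sum(preds_a) + np.sum(preds_b)) / (n_a + n_b)
    se = np.sqrt(p_pool * (1 - p_pool) * (1 / n_a + 1 / n_b))
    z = (rate_a - rate_b) / se
    p_value = 2 * norm.sf(abs(z))  # two-sided test
    return {"rate_a": rate_a, "rate_b": rate_b, "z": z,
            "p_value": p_value, "flag": p_value < alpha}

# Synthetic example: decisions for two groups with different positive rates
rng = np.random.default_rng(0)
group_a = rng.binomial(1, 0.62, size=500)  # ~62% positive rate
group_b = rng.binomial(1, 0.55, size=500)  # ~55% positive rate
print(audit_demographic_parity(group_a, group_b))
```

In practice, an audit of this kind would also correct for multiple comparisons when testing many groups or metrics, since repeated significance tests inflate the false-positive rate.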