Audit Model
Model auditing assesses the fairness, transparency, and reliability of machine learning models, particularly in high-stakes applications such as finance and criminal justice. Current research focuses on multi-layered auditing frameworks that examine model performance, the governance practices of developers, and downstream applications, often incorporating techniques such as federated learning and differential privacy to address data privacy concerns. These efforts are crucial for mitigating bias, ensuring accountability, and building trust in AI systems, informing both the ethical deployment of AI and the development of robust auditing methodologies.
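To make the fairness-assessment step concrete, the following is a minimal sketch of one check an audit might run: the demographic parity difference, i.e., the gap in positive-prediction rates between two groups. The function name, predictions, and group labels are illustrative assumptions, not drawn from any specific paper surveyed here.

```python
# Sketch of a single fairness-audit check: demographic parity difference.
# All data below is hypothetical (e.g., binary loan-approval predictions).

def demographic_parity_difference(preds, groups):
    """Absolute gap in positive-prediction rates between two groups."""
    rates = {}
    for g in set(groups):
        member_preds = [p for p, m in zip(preds, groups) if m == g]
        rates[g] = sum(member_preds) / len(member_preds)
    rate_a, rate_b = rates.values()
    return abs(rate_a - rate_b)

# Hypothetical model outputs and protected-group membership.
preds  = [1, 0, 1, 1, 0, 1, 0, 0]
groups = ["A", "A", "A", "A", "B", "B", "B", "B"]

print(demographic_parity_difference(preds, groups))  # 0.5
```

A real audit would compute several such metrics (e.g., equalized odds, calibration by group) over held-out data and report them alongside governance and provenance checks; this single-metric sketch only illustrates the performance-layer of a multi-layered framework.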