Fairness Verification
Fairness verification in machine learning aims to ensure that algorithms do not perpetuate or exacerbate existing societal biases. Current research focuses on developing methods that verify fairness properties across a range of model architectures, including neural networks and tree-based classifiers, while addressing challenges such as data scarcity, domain shift, and the inherent complexity of model decision-making. These efforts are crucial for building trustworthy AI systems and mitigating discriminatory outcomes in high-stakes domains such as lending and criminal justice.
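One concrete property that verification methods often target is demographic parity: the rate of positive predictions should be similar across groups defined by a protected attribute. The sketch below is purely illustrative (the function name, data, and threshold are assumptions, not drawn from any specific paper) and shows the statistic such methods check, here as an empirical audit rather than a formal proof over the model:

```python
def demographic_parity_difference(y_pred, group):
    """Absolute gap in positive-prediction rates between two groups.

    y_pred: list of 0/1 model predictions
    group:  list of 0/1 protected-attribute values, aligned with y_pred
    """
    rates = []
    for g in (0, 1):
        preds = [p for p, grp in zip(y_pred, group) if grp == g]
        rates.append(sum(preds) / len(preds))
    return abs(rates[0] - rates[1])


# Toy loan-approval example: group 0 is approved 3/4 of the time,
# group 1 only 1/4 of the time, giving a parity gap of 0.5.
y_pred = [1, 1, 1, 0, 0, 0, 0, 1]
group  = [0, 0, 0, 0, 1, 1, 1, 1]
gap = demographic_parity_difference(y_pred, group)
print(gap)  # 0.5
```

Formal verification approaches go further: instead of measuring this gap on a sample, they prove bounds on it over an entire input distribution or input region, which is what makes the problem hard for complex architectures.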