Prediction Sensitivity
Prediction sensitivity assesses how much a model's output changes in response to variations in its input features, particularly those related to protected attributes such as race or gender. Current research focuses on developing metrics to quantify this sensitivity, linking it to established fairness notions, and applying it across domains such as drug efficacy prediction and text classification, often using machine learning models like variational autoencoders. This work is crucial for fairness and accountability in AI systems, enabling continuous monitoring and detection of bias in deployed models.
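As a minimal sketch of the idea, one simple instantiation is a counterfactual perturbation: flip a binary protected attribute, hold everything else fixed, and measure the average change in the model's predicted probability. The sketch below assumes a scikit-learn-style classifier with `predict_proba`, synthetic data, and a hypothetical helper `prediction_sensitivity`; it illustrates one plausible metric, not a specific method from the literature.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy data: column 0 is a binary protected attribute (e.g., group
# membership); the remaining columns are ordinary features.
rng = np.random.default_rng(0)
X = np.column_stack([rng.integers(0, 2, 500), rng.normal(size=(500, 3))])
y = (X[:, 1] + 0.5 * X[:, 2] + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def prediction_sensitivity(model, X, protected_idx=0):
    """Mean absolute change in predicted probability when the binary
    protected attribute is flipped (counterfactual perturbation).
    Hypothetical helper for illustration only."""
    X_flipped = X.copy()
    X_flipped[:, protected_idx] = 1 - X_flipped[:, protected_idx]
    p = model.predict_proba(X)[:, 1]
    p_flipped = model.predict_proba(X_flipped)[:, 1]
    return np.mean(np.abs(p - p_flipped))

print(f"prediction sensitivity: {prediction_sensitivity(model, X):.4f}")

A value near zero suggests the model's predictions are largely invariant to the protected attribute; larger values flag potential bias and, in a deployment setting, could be tracked over time as a continuous monitoring signal.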