Fair Inference
Fair inference in machine learning aims to mitigate bias in model outputs, ensuring equitable treatment across demographic groups. Current research focuses on achieving fairness at the inference stage, rather than relying solely on pre-processing or retraining, using techniques such as counterfactual comparisons and distributionally robust optimization within architectures ranging from graph neural networks to probabilistic graphical models. This work addresses societal biases embedded in training data and matters most in high-stakes applications such as healthcare and criminal justice, where it supports more equitable and trustworthy AI systems.
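One of the inference-time techniques mentioned above, counterfactual comparison, can be illustrated with a minimal sketch: hold every feature fixed, flip only the sensitive attribute, and measure how much the model's prediction changes. The model, the synthetic data, and the choice of placing the sensitive attribute in the last column are all illustrative assumptions, not a method from any specific paper.

```python
# Hedged sketch of an inference-time counterfactual comparison.
# All data and modeling choices below are illustrative assumptions.
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)

# Toy data: the last column is a binary sensitive attribute (e.g. group A/B).
X = rng.normal(size=(500, 4))
sensitive = rng.integers(0, 2, size=500)
X[:, -1] = sensitive
# Outcome partly depends on the sensitive attribute, so bias is present.
y = (X[:, 0] + 0.5 * sensitive + rng.normal(scale=0.5, size=500) > 0).astype(int)

model = LogisticRegression().fit(X, y)

def counterfactual_gap(model, x):
    """Absolute change in the predicted probability when the sensitive
    attribute (last feature) is flipped, with all other features fixed."""
    x_cf = x.copy()
    x_cf[-1] = 1 - x_cf[-1]
    p = model.predict_proba(x.reshape(1, -1))[0, 1]
    p_cf = model.predict_proba(x_cf.reshape(1, -1))[0, 1]
    return abs(p - p_cf)

# A large average gap signals that predictions hinge on group membership;
# an inference-time mitigation could, e.g., average the two predictions.
gaps = np.array([counterfactual_gap(model, x) for x in X])
print(f"mean counterfactual gap: {gaps.mean():.3f}")
```

In practice the flipped input should remain a plausible member of the data distribution; causal formulations of counterfactual fairness also propagate the flip through downstream features rather than changing the attribute in isolation.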