Model Multiplicity
Model multiplicity, the phenomenon where multiple equally performing machine learning models produce different predictions for the same input, is a growing concern across many fields. Current research focuses on quantifying this inconsistency, developing methods to reconcile conflicting predictions for better downstream decision-making, and addressing the ethical and legal implications of this arbitrariness, particularly for individual fairness and recourse. Understanding and mitigating model multiplicity is crucial for building trustworthy and reliable AI systems, especially in high-stakes applications like healthcare and finance, where consistent and explainable predictions are paramount.
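One common way to quantify the inconsistency described above is to measure the fraction of inputs on which a set of comparable models disagree (often called ambiguity). The sketch below is illustrative only: the `ambiguity` helper and the hard-coded model outputs are hypothetical, not taken from any specific paper or dataset.

```python
import numpy as np

def ambiguity(predictions):
    """Fraction of inputs on which at least two models disagree.

    predictions: sequence of shape (n_models, n_inputs) holding
    class labels, one row per model.
    """
    preds = np.asarray(predictions)
    # An input is contested if any model's label differs from model 0's.
    contested = (preds != preds[0]).any(axis=0)
    return contested.mean()

# Three hypothetical models scoring the same 5 inputs; the labels are
# made up to show the computation, not drawn from a real benchmark.
m1 = [1, 0, 1, 1, 0]
m2 = [1, 1, 1, 0, 0]
m3 = [1, 0, 0, 1, 0]
print(ambiguity([m1, m2, m3]))  # 0.6 — three of five inputs are contested
```

Even when all models achieve identical test accuracy, a nonzero ambiguity means some individuals receive different outcomes purely depending on which model was deployed, which is the core fairness and recourse concern.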
Papers
October 23, 2024
July 4, 2024
May 30, 2024
May 28, 2024
December 22, 2023
November 24, 2023
July 5, 2023
June 23, 2023
May 4, 2023
March 10, 2023
February 22, 2023
February 14, 2023
December 6, 2022
September 4, 2022
June 17, 2022
April 21, 2022