Model Multiplicity

Model multiplicity, the phenomenon in which multiple machine learning models achieve near-identical performance yet produce different predictions for the same input, is a growing concern across many fields. Current research focuses on quantifying this predictive inconsistency, developing methods to reconcile conflicting predictions for better downstream decision-making, and addressing the ethical and legal implications of this arbitrariness, particularly for individual fairness and recourse. Understanding and mitigating model multiplicity is crucial for building trustworthy and reliable AI systems, especially in high-stakes applications such as healthcare and finance, where consistent and explainable predictions are paramount.
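The phenomenon can be illustrated with a minimal sketch: below, two hypothetical classifiers (toy decision rules on made-up data, not any method from the literature) reach the same accuracy yet disagree on a fraction of inputs. That disagreement rate is one common way to quantify multiplicity, sometimes called ambiguity.

```python
# Toy illustration of model multiplicity: two classifiers with
# identical accuracy that nonetheless disagree on some inputs.
# Data and decision rules are hypothetical, chosen only to make the effect visible.

data = [  # (feature1, feature2, label)
    (0.2, 0.9, 1),
    (0.8, 0.1, 0),
    (0.4, 0.6, 1),
    (0.7, 0.3, 0),
    (0.5, 0.5, 1),  # model_a errs here, model_b is correct
    (0.9, 0.6, 0),  # model_b errs here, model_a is correct
]

def model_a(x1, x2):
    # Predicts 1 when feature2 strictly exceeds feature1.
    return 1 if x2 > x1 else 0

def model_b(x1, x2):
    # Predicts 1 when feature2 reaches a fixed threshold.
    return 1 if x2 >= 0.5 else 0

def accuracy(model):
    return sum(model(x1, x2) == y for x1, x2, y in data) / len(data)

# Ambiguity: fraction of inputs on which the two models disagree.
ambiguity = sum(model_a(x1, x2) != model_b(x1, x2)
                for x1, x2, _ in data) / len(data)

print(accuracy(model_a), accuracy(model_b), ambiguity)
```

Here both models score 5/6 on the toy data, yet a third of the inputs receive conflicting predictions, so which prediction an individual gets depends on an arbitrary choice between equally good models.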

Papers