Robust Recourse

Robust recourse in algorithmic decision-making focuses on generating actionable recommendations that help individuals overturn unfavorable predictions, even under uncertainty in feature values or changes to the underlying model. Current research emphasizes methods that remain valid under noisy implementations of recommended actions, incorporate user preferences over feature-modification costs, and account for model drift and data distribution shifts. Common techniques include Markov Decision Processes, tree-based models, and adversarial training. This field is crucial for ensuring fairness and transparency in AI systems, promoting user agency, and mitigating the potential for algorithmic bias and discrimination.
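To make the core idea concrete, here is a minimal sketch of recourse that is robust to noisy action implementation, for the simplest possible setting: a linear classifier. The setup, function name, and worst-case formulation are illustrative assumptions, not drawn from any specific paper. The individual receives a recommendation x' such that the model's decision stays favorable even if the implemented action deviates from x' by up to eps in L2 norm.

```python
import numpy as np

def robust_recourse_linear(x, w, b, eps):
    """Minimal-L2 recourse for a linear model f(z) = w.z + b >= 0,
    robust to implementation noise delta with ||delta||_2 <= eps.

    Worst case over the noise ball: min_{||delta||<=eps} w.(x' + delta) + b
    = w.x' + b - eps*||w||, so we require w.x' + b >= eps*||w||.
    (Illustrative sketch; real recourse methods also weight per-feature
    costs and restrict to actionable features.)
    """
    margin = eps * np.linalg.norm(w)      # worst-case noise penalty
    score = w @ x + b
    if score >= margin:
        return x.copy()                   # already robustly favorable
    # Closed-form minimal L2 step: move along w until the robust
    # margin is exactly met.
    step = (margin - score) / (w @ w)
    return x + step * w
```

For example, with w = [1, 0], b = -1, and eps = 0.1, an individual at x = [0, 0] is recommended x' = [1.1, 0]: the extra 0.1 beyond the plain decision boundary absorbs the worst-case implementation noise. Non-robust recourse would stop at [1.0, 0], where any adverse noise flips the decision back.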

Papers