Paper ID: 2112.11313
On the Adversarial Robustness of Causal Algorithmic Recourse
Ricardo Dominguez-Olmedo, Amir-Hossein Karimi, Bernhard Schölkopf
Algorithmic recourse seeks to provide actionable recommendations for individuals to overcome unfavorable classification outcomes from automated decision-making systems. Recourse recommendations should ideally be robust to reasonably small uncertainty in the features of the individual seeking recourse. In this work, we formulate the adversarially robust recourse problem and show that recourse methods that offer minimally costly recourse fail to be robust. We then present methods for generating adversarially robust recourse for linear and for differentiable classifiers. Finally, we show that regularizing the decision-making classifier to behave locally linearly and to rely more strongly on actionable features facilitates the existence of adversarially robust recourse.
Submitted: Dec 21, 2021
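
As a rough illustration of the robust-recourse idea described in the abstract (not the paper's own formulation or code), the sketch below computes minimum-cost recourse for a linear classifier when the individual's features are only known up to an L2 uncertainty ball of radius `eps`. The classifier parameters `w`, `b`, the radius `eps`, and the L2 cost on actions are all assumptions made for this example.

```python
# Minimal sketch (assumed setup, not the paper's implementation):
# adversarially robust recourse for a linear classifier h(x) = sign(w @ x + b),
# assuming L2 feature uncertainty of radius eps and an L2 cost on actions.
import numpy as np

def robust_recourse_linear(x, w, b, eps):
    """Return the minimum-L2-norm action a such that the classifier output
    remains on the favorable side for every plausible true feature vector x'
    with ||x' - x|| <= eps, i.e. w @ (x + a) + b >= eps * ||w||."""
    w_norm = np.linalg.norm(w)
    margin = w @ x + b
    target = eps * w_norm            # margin required for robust validity
    if margin >= target:             # already robustly classified favorably
        return np.zeros_like(x)
    # Move along w (the direction of steepest margin increase) just far
    # enough to reach the required robust margin.
    return ((target - margin) / w_norm**2) * w

# Example: standard (eps = 0) vs. robust (eps = 0.5) recourse cost.
x = np.array([0.0, 0.0])
w, b = np.array([1.0, 1.0]), -1.0
a_std = robust_recourse_linear(x, w, b, eps=0.0)
a_rob = robust_recourse_linear(x, w, b, eps=0.5)
print(np.linalg.norm(a_std), np.linalg.norm(a_rob))  # robust action costs more
```

The example makes concrete the abstract's claim that minimally costly recourse is not robust: with `eps = 0` the cheapest action lands exactly on the decision boundary, so any small perturbation of the features can flip the outcome, whereas requiring validity over the uncertainty ball forces a strictly costlier action.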