Paper ID: 2209.03499

Regulating eXplainable Artificial Intelligence (XAI) May Harm Consumers

Behnam Mohammadi, Nikhil Malik, Tim Derdenger, Kannan Srinivasan

Recent AI algorithms are black-box models whose decisions are difficult to interpret. eXplainable AI (XAI) is a class of methods that seeks to address the lack of AI interpretability and trust by explaining AI decisions to customers. The common wisdom is that regulating AI by mandating fully transparent XAI leads to greater social welfare. Our paper challenges this notion through a game-theoretic model comprising a policy-maker who maximizes social welfare, firms in a duopoly that maximize profits, and heterogeneous consumers. The results show that XAI regulation may be redundant; in fact, mandating fully transparent XAI may make firms and consumers worse off. This reveals a tradeoff between maximizing welfare and receiving explainable AI outputs. We extend the existing literature on both methodological and substantive fronts, and we introduce and study the notion of XAI fairness, which may be impossible to guarantee even under mandatory XAI. Finally, we discuss the regulatory and managerial implications of our results for policy-makers and businesses, respectively.

Submitted: Sep 7, 2022