Paper ID: 2408.00786

Whether to trust: the ML leap of faith

Tory Frame, Julian Padget, George Stothart, Elizabeth Coulthard

Human trust is critical for trustworthy AI adoption. Trust is commonly understood as an attitude, but attitudes cannot be accurately measured or managed. Trust in the overall system, in ML, and in ML's component parts is commonly conflated, so most users do not understand the leap of faith they take when they trust ML. Current efforts to build trust explain ML's process, which non-ML experts can find hard to comprehend because it is complex and because the explanations are unrelated to their own (unarticulated) mental models. We propose an innovative way of directly building intrinsic trust in ML, by discerning and measuring the Leap of Faith (LoF) taken when a user trusts ML. Our LoF matrix identifies where an ML model aligns with a user's own mental model. This alignment is identified rigorously yet practically by feeding the user's data and objective function into both an ML model and an expert-validated rules-based AI model, a verified point of reference that can be tested a priori against the user's own mental model. The LoF matrix visually contrasts the two models' outputs, so the remaining ML-reasoning leap of faith can be discerned. Our proposed trust metrics measure, for the first time, whether users demonstrate trust through their actions, and we link deserved trust to outcomes. Our contribution is significant because it enables empirical assessment and management of ML trust drivers, supporting trustworthy ML adoption. We illustrate our approach with a long-term, high-stakes field study: a 3-month pilot of a sleep-improvement system with embedded AI.
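
The abstract gives no implementation details, so the following is only a minimal illustrative sketch of the LoF-matrix idea: run the same user data through an ML model and an expert-validated rules-based reference model, then cross-tabulate their recommendations so that off-diagonal cells expose the remaining ML-reasoning leap of faith. All names here (ml_recommend, rules_recommend, the sleep-feature fields) are hypothetical assumptions, not the authors' system.

```python
# Illustrative sketch only: cross-tabulate an ML model's recommendations against
# an expert-validated rules-based model's recommendations on the same inputs.
# Off-diagonal (mismatched) cells mark where trusting the ML output still
# requires a leap of faith beyond the user's rule-aligned mental model.
from collections import Counter
from typing import Dict, List

# Hypothetical user records (e.g. nightly sleep features).
USERS: List[Dict[str, float]] = [
    {"sleep_hours": 5.5, "caffeine_after_3pm": 1, "screen_time_min": 90},
    {"sleep_hours": 7.8, "caffeine_after_3pm": 0, "screen_time_min": 20},
    {"sleep_hours": 6.2, "caffeine_after_3pm": 1, "screen_time_min": 45},
]

def rules_recommend(user: Dict[str, float]) -> str:
    """Stand-in for the expert-validated rules-based reference model."""
    if user["caffeine_after_3pm"]:
        return "cut_caffeine"
    if user["sleep_hours"] < 7:
        return "earlier_bedtime"
    return "no_change"

def ml_recommend(user: Dict[str, float]) -> str:
    """Stand-in for the ML model's output on the same data and objective."""
    # Placeholder heuristic; a real system would call a trained model here.
    if user["screen_time_min"] > 60:
        return "earlier_bedtime"
    if user["caffeine_after_3pm"]:
        return "cut_caffeine"
    return "no_change"

def leap_of_faith_matrix(users: List[Dict[str, float]]) -> Counter:
    """Count (rules output, ML output) pairs; mismatches are leap-of-faith cells."""
    return Counter((rules_recommend(u), ml_recommend(u)) for u in users)

if __name__ == "__main__":
    for (rule_out, ml_out), n in sorted(leap_of_faith_matrix(USERS).items()):
        flag = "match" if rule_out == ml_out else "LEAP OF FAITH"
        print(f"rules={rule_out:<16} ml={ml_out:<16} n={n}  [{flag}]")
```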

Submitted: Jul 17, 2024