Trust Calibration

Trust calibration in human-agent interaction focuses on aligning a user's trust in an artificial intelligence system with the system's actual capabilities, often operationalized by matching the system's expressed confidence to its real accuracy; this alignment is crucial for effective collaboration and user acceptance. Current research emphasizes how factors such as system transparency, performance feedback, and users' pre-existing biases shape trust formation and calibration, often through experimental designs in which human participants interact with AI systems in contexts such as robotics, autonomous driving, and visual analytics. This work is significant because well-calibrated trust is essential for the safe and beneficial deployment of AI across numerous applications, with implications ranging from human-robot teamwork to the adoption of autonomous vehicles.
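
Because calibration is commonly quantified by comparing a system's stated confidence with its empirical accuracy, the sketch below illustrates one standard measure, expected calibration error (ECE). The equal-width 10-bin scheme and the toy data are illustrative assumptions, not drawn from any specific paper in this collection.

```python
# Minimal sketch: expected calibration error (ECE) over equal-width confidence bins.
# Binning scheme and toy data are assumptions for illustration only.
import numpy as np

def expected_calibration_error(confidences, correct, n_bins=10):
    """Average |accuracy - confidence| per bin, weighted by the fraction of samples in the bin."""
    confidences = np.asarray(confidences, dtype=float)
    correct = np.asarray(correct, dtype=float)
    bin_edges = np.linspace(0.0, 1.0, n_bins + 1)
    ece = 0.0
    for lo, hi in zip(bin_edges[:-1], bin_edges[1:]):
        in_bin = (confidences > lo) & (confidences <= hi)
        if not in_bin.any():
            continue
        gap = abs(correct[in_bin].mean() - confidences[in_bin].mean())
        ece += in_bin.mean() * gap
    return ece

# Toy example: a system that reports 90% confidence but is right only ~60% of the
# time is over-confident; a large ECE signals that taking its confidence at face
# value would leave user trust mis-calibrated.
conf = np.array([0.9, 0.9, 0.9, 0.9, 0.9, 0.6, 0.6, 0.6, 0.6, 0.6])
hits = np.array([1,   0,   1,   0,   1,   1,   0,   1,   1,   0])
print(f"ECE = {expected_calibration_error(conf, hits):.3f}")
```

In this toy case the over-confident 0.9 bin contributes the entire error, which mirrors the practical concern in the literature: users who accept inflated confidence at face value tend to over-trust the system.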

Papers