Decision Making
Decision-making research currently focuses on improving human-AI collaboration and on developing more robust and explainable AI decision-making systems. Key areas include enhancing AI explanations to better align with human reasoning, incorporating uncertainty and context into AI models (e.g., via Bayesian methods, analogical reasoning, and hierarchical reinforcement learning), and evaluating AI decision-making performance against human benchmarks, often with novel metrics and frameworks. This work matters both for advancing our understanding of human decision processes and for building more effective and trustworthy AI systems across diverse applications, from healthcare and finance to autonomous driving and infrastructure management.
Papers
The Role of Heuristics and Biases During Complex Choices with an AI Teammate
Nikolos Gurney, John H. Miller, David V. Pynadath
Who Should I Trust: AI or Myself? Leveraging Human and AI Correctness Likelihood to Promote Appropriate Trust in AI-Assisted Decision-Making
Shuai Ma, Ying Lei, Xinru Wang, Chengbo Zheng, Chuhan Shi, Ming Yin, Xiaojuan Ma