Decision Making
Decision-making research currently focuses on improving human-AI collaboration and on developing more robust and explainable AI decision-making systems. Key areas include enhancing AI explanations to better align with human reasoning, incorporating uncertainty and context into AI models (e.g., via Bayesian methods, analogical reasoning, and hierarchical reinforcement learning), and evaluating AI decision-making against human benchmarks, often with novel metrics and frameworks. This work is significant both for advancing our understanding of human decision processes and for building more effective and trustworthy AI systems across diverse applications, from healthcare and finance to autonomous driving and infrastructure management.
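To make the Bayesian angle concrete, here is a minimal, self-contained sketch (not drawn from any of the papers below; the options and counts are hypothetical) of how a decision-making system can represent uncertainty about two candidate actions with Beta-Bernoulli posteriors and choose between them:

```python
def beta_update(alpha, beta, successes, failures):
    """Conjugate Beta-Bernoulli update: posterior parameters after observed outcomes."""
    return alpha + successes, beta + failures

def posterior_mean(alpha, beta):
    """Expected success probability under a Beta(alpha, beta) posterior."""
    return alpha / (alpha + beta)

# Two hypothetical options, each starting from a uniform Beta(1, 1) prior.
option_a = beta_update(1, 1, successes=8, failures=2)  # 8 of 10 trials succeeded
option_b = beta_update(1, 1, successes=3, failures=7)  # 3 of 10 trials succeeded

# A simple decision rule: pick the option with the higher posterior mean.
best = "A" if posterior_mean(*option_a) > posterior_mean(*option_b) else "B"
print(best)  # A
```

The posterior parameters also quantify how confident the choice is, which is what lets such a system defer to a human teammate when the two posteriors overlap heavily.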
Papers
Relative Value Biases in Large Language Models
William M. Hayes, Nicolas Yax, Stefano Palminteri
True Knowledge Comes from Practice: Aligning LLMs with Embodied Environments via Reinforcement Learning
Weihao Tan, Wentao Zhang, Shanqi Liu, Longtao Zheng, Xinrun Wang, Bo An
A2C: A Modular Multi-stage Collaborative Decision Framework for Human-AI Teams
Shahroz Tariq, Mohan Baruwal Chhetri, Surya Nepal, Cecile Paris