Artificial Intelligence Decision-Making

Artificial intelligence (AI) decision-making research centers on improving the transparency, trustworthiness, and alignment of AI systems with human values and expectations. Current efforts focus on explainable AI (XAI) methods, including techniques that generate human-understandable narratives from model outputs and that identify discrepancies between AI and human judgments, often built on reinforcement learning and generative models. This work is crucial for building responsible AI systems, addressing concerns about bias and fairness, and fostering trust in AI's role in high-stakes applications across diverse fields, from smart homes to Earth system science.
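
As a minimal sketch of the two ideas mentioned above, the toy example below flags cases where a model's decision disagrees with a human judgment and renders each disagreement as a short plain-language narrative. Everything here is illustrative: the `Case` structure, the `find_discrepancies` and `narrate` helpers, and the threshold and margin values are hypothetical and do not correspond to any specific method from the literature.

```python
# Toy illustration: surface AI-vs-human disagreements and narrate them.
# All names, thresholds, and data are hypothetical.

from dataclasses import dataclass

@dataclass
class Case:
    case_id: str
    ai_score: float   # model confidence that the positive label applies
    human_label: int  # 0 or 1, the human judgment

def find_discrepancies(cases, threshold=0.5, margin=0.2):
    """Return (case, confident) pairs where the thresholded AI decision
    disagrees with the human label; confident disagreements (score far
    from the threshold) are sorted first for review."""
    flagged = []
    for c in cases:
        ai_label = int(c.ai_score >= threshold)
        if ai_label != c.human_label:
            confident = abs(c.ai_score - threshold) >= margin
            flagged.append((c, confident))
    flagged.sort(key=lambda t: (not t[1], -abs(t[0].ai_score - threshold)))
    return flagged

def narrate(case, threshold=0.5):
    """Render one disagreement as a human-readable sentence."""
    ai_label = int(case.ai_score >= threshold)
    return (f"For case {case.case_id}, the model scored the positive class "
            f"at {case.ai_score:.0%} (decision: {ai_label}), while the human "
            f"reviewer labeled it {case.human_label}.")

if __name__ == "__main__":
    cases = [
        Case("a1", ai_score=0.92, human_label=0),  # confident disagreement
        Case("a2", ai_score=0.55, human_label=0),  # borderline disagreement
        Case("a3", ai_score=0.10, human_label=0),  # agreement, not flagged
    ]
    for case, confident in find_discrepancies(cases):
        kind = "confident" if confident else "borderline"
        print(f"[{kind}] {narrate(case)}")
```

In practice, the research surveyed here replaces each piece of this sketch with learned components: generative models produce the narratives rather than a string template, and reinforcement learning tunes which discrepancies are surfaced rather than a fixed threshold and margin.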

Papers