Artificial Intelligence Decision-Making
Artificial intelligence (AI) decision-making research centers on improving the transparency, trustworthiness, and alignment of AI systems with human values and expectations. Current efforts focus on developing explainable AI (XAI) methods, including techniques to generate human-understandable narratives from model outputs and to identify discrepancies between AI and human judgments, often using reinforcement learning and generative models. This work is crucial for building responsible AI systems, addressing concerns about bias and fairness, and fostering trust in AI's role in high-stakes applications across diverse fields, from smart homes to earth system science.
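To make the XAI methods mentioned above concrete, the following is a minimal, illustrative sketch (not drawn from any specific paper above) of occlusion-style feature attribution, one common explainable-AI technique: each feature is scored by how much replacing it with a baseline value changes the model's output. The model, feature names, and baseline values are all hypothetical.

```python
def attribute(model, x, baseline):
    """Score each feature by how much swapping it for a baseline
    value shifts the model's prediction (occlusion attribution)."""
    base_pred = model(x)
    scores = {}
    for name in x:
        perturbed = dict(x)
        perturbed[name] = baseline[name]  # "occlude" this one feature
        scores[name] = base_pred - model(perturbed)
    return scores

# A toy linear "risk" model standing in for a learned predictor.
def toy_model(features):
    return 0.6 * features["income"] + 0.3 * features["tenure"] - 0.2 * features["debt"]

x = {"income": 1.0, "tenure": 0.5, "debt": 0.8}
baseline = {"income": 0.0, "tenure": 0.0, "debt": 0.0}
print(attribute(toy_model, x, baseline))
```

For a linear model the scores recover each term's contribution exactly; for real nonlinear models, occlusion gives only a local, approximate explanation, which is why the literature also explores narrative generation and human-judgment comparison on top of such attributions.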