Blame Assignment
Blame assignment, the process of determining which agent or component is responsible for a negative outcome, is an active research area spanning fields from multi-agent systems to explainable AI. Current work focuses on methods for allocating blame fairly in complex systems, including algorithms for credit assignment among cooperating agents and improved evaluation metrics for assessing the accuracy and robustness of explainable AI models. These efforts aim to make AI systems more reliable and trustworthy, and to improve the design of collaborative systems, by providing better tools for understanding and mitigating errors.
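One common way such fair-allocation methods are formalized is via Shapley values: each agent's blame is its average marginal contribution to the bad outcome over all orders in which the team could be assembled. The sketch below illustrates this idea only; the outcome function, agent names, and failure scenario are hypothetical assumptions for illustration, not taken from any particular paper in this collection.

```python
# Minimal sketch of Shapley-value-style blame assignment (illustrative only).
from itertools import permutations

def shapley_blame(agents, outcome):
    """Average each agent's marginal contribution to the (negative)
    outcome over all join orders of the team."""
    blame = {a: 0.0 for a in agents}
    orders = list(permutations(agents))
    for order in orders:
        coalition = set()
        prev_value = outcome(frozenset(coalition))
        for agent in order:
            coalition.add(agent)
            value = outcome(frozenset(coalition))
            blame[agent] += value - prev_value  # marginal contribution
            prev_value = value
    n = len(orders)
    return {a: b / n for a, b in blame.items()}

# Hypothetical scenario: the team task fails (cost 1.0) whenever agent "a"
# acts together with at least one other agent.
def failure_cost(coalition):
    return 1.0 if "a" in coalition and len(coalition) >= 2 else 0.0

if __name__ == "__main__":
    # Agent "a" receives most of the blame; "b" and "c" split the rest.
    print(shapley_blame(["a", "b", "c"], failure_cost))
```

Because Shapley values sum to the total outcome cost, the resulting scores can be read directly as shares of responsibility, which is one reason this formulation recurs in work on fair credit assignment.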