Moral Agent
Research on artificial moral agents (AMAs) focuses on designing AI systems capable of making ethically sound decisions. Current work explores various approaches, including reinforcement learning architectures augmented with normative, reason-based decision-making, and investigates the limitations imposed by computational intractability. Key challenges involve defining and implementing ethical frameworks, addressing moral heterogeneity within agent populations, and ensuring sufficient interpretability for trust and accountability. This field is crucial for the safe and responsible deployment of AI in high-stakes applications, impacting both the development of ethical AI guidelines and the broader understanding of moral reasoning itself.
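To make the idea of "reinforcement learning augmented with normative reason-based decision-making" concrete, here is a minimal sketch, not drawn from any of the listed papers: a standard Q-learning agent whose action selection is constrained by a normative predicate that rules out impermissible actions before the greedy choice is made. The class and parameter names (`NormativeQAgent`, `is_permissible`) are illustrative assumptions.

```python
import random

class NormativeQAgent:
    """Q-learning agent that filters actions through a normative check.

    A sketch under assumed interfaces: `is_permissible(state, action)`
    stands in for whatever normative reasoning module judges actions.
    """

    def __init__(self, actions, is_permissible,
                 alpha=0.5, gamma=0.9, epsilon=0.1):
        self.actions = list(actions)          # full action set
        self.is_permissible = is_permissible  # (state, action) -> bool
        self.alpha, self.gamma, self.epsilon = alpha, gamma, epsilon
        self.q = {}                           # (state, action) -> value

    def allowed(self, state):
        # Normative step: exclude impermissible actions up front,
        # so reward maximization never considers them.
        acts = [a for a in self.actions if self.is_permissible(state, a)]
        return acts or self.actions           # fall back if filter empties the set

    def act(self, state):
        acts = self.allowed(state)
        if random.random() < self.epsilon:
            return random.choice(acts)        # explore among permissible actions
        return max(acts, key=lambda a: self.q.get((state, a), 0.0))

    def update(self, state, action, reward, next_state):
        # Standard Q-learning backup, restricted to permissible next actions.
        best_next = max((self.q.get((next_state, a), 0.0)
                         for a in self.allowed(next_state)), default=0.0)
        old = self.q.get((state, action), 0.0)
        self.q[(state, action)] = old + self.alpha * (reward + self.gamma * best_next - old)
```

For example, an agent given actions `["help", "deceive"]` and a predicate forbidding `"deceive"` will only ever select `"help"`, regardless of the rewards deception might earn; this hard-filtering design is one of the simplest ways to combine learned value estimates with fixed norms, though it cannot trade norms off against outcomes the way softer penalty-based schemes can.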
Papers
[Paper listing: 16 entries dated July 20, 2022 through October 29, 2024; titles not preserved in this extraction.]