Ethical Decision Making
Ethical decision-making in artificial intelligence (AI) focuses on developing algorithms and systems whose choices align with human values, addressing bias and ensuring fairness. Current research emphasizes integrating ethical considerations throughout the AI lifecycle, from data collection and model training to deployment and oversight, using techniques such as multi-stakeholder alignment frameworks and behavior trees to strengthen moral reasoning in large language models (LLMs). This work is crucial for mitigating potential harms from AI systems in applications such as autonomous vehicles and decision-support tools, and for fostering responsible AI development.
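As one concrete illustration of the behavior-tree idea, the minimal sketch below shows how a Sequence node might gate a proposed action behind simple ethical checks, failing closed if any check fails. All names, thresholds, and checks here (risk_score, low_risk, human_oversight_ok) are illustrative assumptions for this sketch, not the method of any particular paper.

```python
from dataclasses import dataclass
from enum import Enum
from typing import Callable, List


class Status(Enum):
    SUCCESS = "success"
    FAILURE = "failure"


@dataclass
class Action:
    """A proposed action with metadata the ethics checks inspect."""
    name: str
    risk_score: float      # hypothetical estimated harm probability in [0, 1]
    affects_humans: bool


# A condition node: returns SUCCESS if the check passes, else FAILURE.
Check = Callable[[Action], Status]


def sequence(children: List[Check]) -> Check:
    """Behavior-tree Sequence node: succeeds only if every child succeeds."""
    def run(action: Action) -> Status:
        for child in children:
            if child(action) is Status.FAILURE:
                return Status.FAILURE
        return Status.SUCCESS
    return run


def low_risk(action: Action) -> Status:
    # Hypothetical threshold; a real system would calibrate and justify this.
    return Status.SUCCESS if action.risk_score < 0.2 else Status.FAILURE


def human_oversight_ok(action: Action) -> Status:
    # Placeholder for a human-in-the-loop approval or policy lookup.
    return Status.SUCCESS if not action.affects_humans else Status.FAILURE


# Compose the checks into an ethics gate that vets actions before execution.
ethics_gate = sequence([low_risk, human_oversight_ok])

proposed = Action(name="reroute_vehicle", risk_score=0.05, affects_humans=False)
if ethics_gate(proposed) is Status.SUCCESS:
    print(f"Executing: {proposed.name}")
else:
    print(f"Blocked by ethics gate: {proposed.name}")
```

The fail-closed Sequence composition is the key design choice: an action executes only if every ethical condition passes, which is why behavior trees are attractive for auditable moral reasoning in such systems.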