Ethical Decision Making

Ethical decision-making in artificial intelligence (AI) focuses on developing algorithms and systems whose choices align with human values, which requires addressing bias and ensuring fairness. Current research emphasizes integrating ethical considerations throughout the AI lifecycle, from data collection and model training to deployment and oversight, using techniques such as multi-stakeholder alignment frameworks and behavior trees to strengthen moral reasoning in models like Large Language Models (LLMs); a minimal behavior-tree sketch is given below. This work is crucial for mitigating potential harms from AI in applications ranging from autonomous vehicles to decision-support tools, and for fostering responsible AI development.
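
As one illustration of the behavior-tree approach mentioned above, the sketch below gates a proposed action behind ethical condition nodes that must all succeed before the action runs. The node types are standard behavior-tree constructs, but the specific checks and thresholds (`harm_score`, `demographic_parity_gap`) are hypothetical placeholders, not drawn from any particular paper.

```python
# Minimal sketch of a behavior tree that gates an AI decision behind
# ethical checks. Node names and threshold values are illustrative
# assumptions; a real system would query fairness metrics or a harm
# model rather than read flags from a context dict.

from dataclasses import dataclass
from typing import Callable, List

SUCCESS, FAILURE = "success", "failure"

@dataclass
class Condition:
    """Leaf node: succeeds only if its ethical check passes."""
    name: str
    check: Callable[[dict], bool]

    def tick(self, context: dict) -> str:
        return SUCCESS if self.check(context) else FAILURE

@dataclass
class Action:
    """Leaf node: executes the proposed behavior."""
    name: str
    run: Callable[[dict], None]

    def tick(self, context: dict) -> str:
        self.run(context)
        return SUCCESS

@dataclass
class Sequence:
    """Composite node: ticks children in order, failing on the first failure."""
    children: List

    def tick(self, context: dict) -> str:
        for child in self.children:
            if child.tick(context) == FAILURE:
                return FAILURE
        return SUCCESS

# Hypothetical ethical gates: the action node only runs if every
# preceding condition node in the sequence succeeds.
tree = Sequence([
    Condition("no_predicted_harm", lambda ctx: ctx.get("harm_score", 1.0) < 0.1),
    Condition("fairness_ok", lambda ctx: ctx.get("demographic_parity_gap", 1.0) < 0.05),
    Action("execute_decision", lambda ctx: print(f"Acting: {ctx['proposed_action']}")),
])

result = tree.tick({
    "proposed_action": "approve_loan",
    "harm_score": 0.02,
    "demographic_parity_gap": 0.03,
})
print(result)  # "success" only if every ethical gate passed
```

The sequence composite makes the ethical checks veto points: failing any condition short-circuits the tree and the action is never executed, which is the property that makes behavior trees attractive for constraining model behavior.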

Papers