Moral Machine
Moral Machine research explores how artificial intelligence (AI) systems, particularly large language models (LLMs), make ethical decisions in challenging scenarios, often using variations of the "trolley problem." Current research focuses on comparing AI moral judgments across different languages and cultures, analyzing the alignment of AI preferences with human values, and developing methods to evaluate and improve the ethical reasoning of AI. This work is crucial for ensuring responsible AI development and deployment, particularly in high-stakes applications like autonomous vehicles, by identifying and mitigating biases and promoting fairness in AI decision-making.
Papers
November 11, 2024
October 9, 2024
July 21, 2024
July 2, 2024
April 3, 2024
December 29, 2023
September 12, 2023
August 28, 2023
May 27, 2023
February 8, 2023
December 21, 2022