Moral Reasoning
Moral reasoning research examines how humans and artificial intelligence (AI) systems make ethical judgments and what factors shape those decisions. Current work uses large language models (LLMs) to analyze moral content in text, benchmarks AI moral reasoning against human judgments, and develops methods to improve ethical decision-making through techniques such as fine-tuning and knowledge augmentation. This research is central to mitigating bias in AI systems and to ensuring their responsible development and deployment across applications ranging from legal analysis to ethical decision support systems.
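To make "analyzing moral content in text" concrete, here is a minimal sketch of one common pre-LLM baseline: lexicon matching against the five foundations of Moral Foundations Theory (care, fairness, loyalty, authority, purity). The tiny lexicon and the function name `score_moral_foundations` are illustrative assumptions, not a validated resource or any specific paper's method; real systems use much larger lexicons or model-based classifiers.

```python
# Toy lexicon-based moral content scorer (illustrative sketch only).
# The word lists below are hypothetical stand-ins for a real moral lexicon.
from collections import Counter
import re

MORAL_LEXICON = {
    "care": {"harm", "hurt", "protect", "suffer", "compassion"},
    "fairness": {"fair", "unfair", "justice", "rights", "equal"},
    "loyalty": {"loyal", "betray", "ally", "traitor", "solidarity"},
    "authority": {"obey", "duty", "law", "respect", "tradition"},
    "purity": {"pure", "sacred", "disgust", "degrade", "sanctity"},
}

def score_moral_foundations(text: str) -> Counter:
    """Count lexicon hits per foundation in a lowercased, tokenized text."""
    tokens = re.findall(r"[a-z]+", text.lower())
    counts = Counter()
    for foundation, words in MORAL_LEXICON.items():
        counts[foundation] = sum(1 for t in tokens if t in words)
    return counts

scores = score_moral_foundations(
    "It is unfair to harm others; justice and compassion must protect the weak."
)
# Here "care" scores highest (harm, compassion, protect), then "fairness".
```

LLM-based approaches replace the fixed lexicon with a prompted or fine-tuned classifier, which handles negation and context that simple word matching misses.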