Moral Reasoning
Moral reasoning research seeks to understand how humans and artificial intelligence (AI) make ethical judgments and which factors shape those decisions. Current work uses large language models (LLMs) to analyze moral content in text, benchmarks the moral reasoning of AI systems against human judgments, and develops methods to improve ethical decision-making in AI through techniques such as fine-tuning and knowledge augmentation. This research is important for mitigating biases in AI systems and for ensuring their responsible development and deployment across applications ranging from legal analysis to ethical decision-support systems.
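To make "analyzing moral content in text" concrete, here is a minimal sketch of lexicon-based moral scoring, in the spirit of Moral Foundations-style dictionaries. The foundation names and word lists below are illustrative assumptions for the sketch, not a real published lexicon; production work typically uses LLMs or validated dictionaries instead.

```python
# Toy lexicon mapping moral foundations to trigger words.
# These word lists are illustrative assumptions, not a published resource.
MORAL_LEXICON = {
    "care": {"harm", "hurt", "protect", "suffer", "compassion"},
    "fairness": {"fair", "unfair", "justice", "cheat", "equal"},
}

def moral_foundation_scores(text: str) -> dict[str, int]:
    """Count how many tokens in `text` match each foundation's word list."""
    tokens = [t.strip(".,!?;:").lower() for t in text.split()]
    return {
        foundation: sum(t in words for t in tokens)
        for foundation, words in MORAL_LEXICON.items()
    }

scores = moral_foundation_scores(
    "It is unfair to harm others; justice demands we protect the weak."
)
print(scores)  # → {'care': 2, 'fairness': 2}
```

Simple keyword counting like this misses negation and context, which is precisely the gap that motivates using LLMs for moral content analysis.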