Human Moral
Research on human morality is increasingly leveraging large language models (LLMs) to understand and potentially replicate human moral reasoning, focusing on how these models align with human judgments across diverse cultures and contexts. Current studies analyze LLM performance on moral dilemmas, examining consistency across different moral frameworks and identifying biases reflecting the training data. This work is crucial for developing ethical AI systems and for gaining a deeper understanding of the cognitive processes underlying human moral decision-making, with implications for fields ranging from AI safety to social psychology.
Papers
November 11, 2024
July 2, 2024
May 17, 2024
April 3, 2024
October 30, 2023
October 24, 2023
October 23, 2023
August 29, 2023
October 4, 2022
September 2, 2022
May 25, 2022
January 19, 2022