Moral Development
Moral development in artificial intelligence focuses on imbuing large language models (LLMs) with ethical reasoning and decision-making capabilities, aiming to mitigate biases and harmful outputs. Current research emphasizes developing benchmarks and algorithms to evaluate and improve LLMs' moral reasoning, including approaches that model human moral progress and incorporate contextual understanding of ethical dilemmas. This work is crucial for ensuring the responsible development and deployment of AI systems, impacting both the trustworthiness of AI and its integration into society.
Papers
July 21, 2024
June 28, 2024
June 6, 2024
May 23, 2024
May 17, 2024
September 23, 2023
May 29, 2023
December 20, 2022