Moral Reasoning

Moral reasoning research investigates how humans and artificial intelligence (AI) systems make ethical judgments and which factors shape those decisions. Current work uses large language models (LLMs) to analyze moral content in text, to benchmark AI's moral reasoning against human judgments, and to improve AI's ethical decision-making through techniques such as fine-tuning and knowledge augmentation. This research is crucial for mitigating bias in AI systems and for ensuring their responsible development and deployment across applications ranging from legal analysis to ethical decision-support systems.
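As a minimal illustration of the moral-content-analysis task mentioned above, the sketch below uses a small hand-built keyword lexicon instead of an LLM. The category names and word lists are illustrative assumptions (loosely inspired by moral-foundations-style categories), not any published resource, and real systems would rely on model-based classification rather than keyword matching.

```python
# Toy sketch: lexicon-based moral content tagging, a simplified stand-in
# for LLM-based analysis. Categories and word lists are assumptions made
# for illustration only.
from collections import Counter

MORAL_LEXICON = {
    "care": {"harm", "suffer", "protect", "compassion"},
    "fairness": {"fair", "unfair", "justice", "cheat"},
    "authority": {"obey", "duty", "law", "tradition"},
}

def tag_moral_content(text: str) -> Counter:
    """Count occurrences of lexicon words per moral category."""
    tokens = [t.strip(".,!?;:") for t in text.lower().split()]
    counts = Counter()
    for category, words in MORAL_LEXICON.items():
        counts[category] = sum(1 for t in tokens if t in words)
    return counts

scores = tag_moral_content(
    "It is unfair to harm others; justice demands we protect them."
)
# scores counts hits per category, e.g. "unfair" and "justice" for fairness
```

An LLM-based pipeline would replace the lexicon lookup with a classification prompt or a fine-tuned labeling model, but the input/output shape (text in, per-category scores out) is the same.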

Papers