Moral Dilemma
Moral dilemmas, situations with conflicting ethical choices, are a central focus in evaluating the ethical implications of large language models (LLMs). Current research investigates how LLMs handle these dilemmas, analyzing their responses through various ethical frameworks and assessing biases related to gender, ethnicity, and cultural norms. This work is crucial for developing more responsible and equitable AI systems, impacting both the design of future models and the ethical guidelines governing their deployment in real-world applications. The goal is to move beyond simply measuring accuracy to understanding and mitigating the inherent biases and limitations of LLMs in navigating complex moral choices.
Papers
July 26, 2023
June 25, 2023
June 20, 2023
June 1, 2023
February 21, 2023
December 20, 2022
August 12, 2022
May 12, 2022