Ethical Reasoning
Ethical reasoning in artificial intelligence (AI) focuses on developing systems capable of making morally sound decisions, aligning AI behavior with human values, and ensuring transparency and accountability. Current research emphasizes integrating diverse ethical frameworks (deontology, consequentialism, virtue ethics) into large language models (LLMs) and other AI architectures, often using neuro-symbolic methods to improve explainability and consistency across different languages and cultural contexts. This work is crucial for mitigating potential harms from AI systems and for establishing responsible development practices, with applications ranging from healthcare and law enforcement to autonomous vehicles.
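As a toy illustration of the neuro-symbolic pattern described above, the sketch below combines a symbolic deontological rule layer (hard constraints) with a numeric consequentialist score. All names here (`FORBIDDEN_ACTIONS`, `utility_model`, `ethically_permissible`) are hypothetical; in a real system the utility scorer would be a learned model such as an LLM-based evaluator rather than a lookup table.

```python
# Hypothetical sketch: symbolic rules veto actions outright, while a
# learned (here, stubbed) utility model handles the consequentialist side.

# Deontological layer: actions forbidden regardless of expected utility.
FORBIDDEN_ACTIONS = {"deceive_user", "leak_private_data"}

def utility_model(action: str) -> float:
    """Stand-in for a learned consequentialist scorer (assumed interface)."""
    scores = {"share_summary": 0.8, "deceive_user": 0.9, "refuse": 0.2}
    return scores.get(action, 0.0)

def ethically_permissible(action: str, threshold: float = 0.5) -> bool:
    # Symbolic check first: rules override any utility estimate.
    if action in FORBIDDEN_ACTIONS:
        return False
    # Learned check second: utility must clear the threshold.
    return utility_model(action) >= threshold
```

Note the design choice: even though `deceive_user` scores a high utility (0.9), the rule layer vetoes it, which is what makes the combined check more consistent and explainable than a utility score alone.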