Ethical Risk
Work on ethical risk in artificial intelligence, particularly for large language models (LLMs) and assistive robots, centers on mitigating bias and ensuring fairness, accountability, and transparency in system design and deployment. Current research emphasizes building robust evaluation benchmarks tailored to diverse cultural values and applying methods such as ethical hazard analysis to identify and address potential harms before they occur. This work is central to building trust in AI systems and promoting responsible innovation, informing both the development of ethical guidelines and the safe integration of AI into societal applications.
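As a rough illustration of what a culturally scoped evaluation benchmark might look like in practice, the sketch below scores a model's answers per cultural group. It is a minimal, hypothetical example: the BenchmarkItem structure, the toy_model stub, and the keyword-matching scorer are simplifications for illustration and are not taken from any of the papers listed here.

```python
from dataclasses import dataclass
from typing import Callable, Dict, List


@dataclass
class BenchmarkItem:
    """One prompt plus the answers considered acceptable for a given cultural context."""
    prompt: str
    acceptable_answers: List[str]
    culture: str  # label for the cultural group or value framework the item targets


def evaluate(model: Callable[[str], str], items: List[BenchmarkItem]) -> Dict[str, float]:
    """Return per-culture accuracy: the fraction of items the model answers acceptably."""
    correct: Dict[str, int] = {}
    total: Dict[str, int] = {}
    for item in items:
        total[item.culture] = total.get(item.culture, 0) + 1
        answer = model(item.prompt).strip().lower()
        # Crude scoring: the response counts as acceptable if it contains any reference answer.
        if any(ref.lower() in answer for ref in item.acceptable_answers):
            correct[item.culture] = correct.get(item.culture, 0) + 1
    return {culture: correct.get(culture, 0) / n for culture, n in total.items()}


if __name__ == "__main__":
    # Stand-in for a real LLM call; a deployed harness would query the model under test.
    def toy_model(prompt: str) -> str:
        return "it depends on local norms"

    items = [
        BenchmarkItem("Is it acceptable to interrupt an elder?", ["depends", "no"], "culture_a"),
        BenchmarkItem("Should gifts be opened immediately?", ["depends", "yes"], "culture_b"),
    ]
    print(evaluate(toy_model, items))  # e.g. {'culture_a': 1.0, 'culture_b': 1.0}
```

Real benchmarks replace the keyword match with human ratings or a calibrated judge model, but the per-culture breakdown is the point: an aggregate score can hide poor performance on specific value systems.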
Papers (by publication date)
November 14, 2024
October 26, 2024
July 27, 2024
July 15, 2024
June 13, 2024
June 8, 2024
December 15, 2023
May 16, 2023
April 21, 2023
December 4, 2022
October 7, 2022
October 6, 2022