Ethical Risk

Ethical risk in artificial intelligence, particularly for large language models (LLMs) and assistive robots, concerns mitigating bias and ensuring fairness, accountability, and transparency in system design and deployment. Current research emphasizes developing robust evaluation benchmarks tailored to diverse cultural values and applying methods such as ethical hazard analysis to proactively identify and address potential harms. This work is crucial for building trust in AI systems and promoting responsible innovation, shaping both the development of ethical guidelines and the safe integration of AI into societal applications.

Papers