AI Ethics
AI ethics focuses on ensuring the responsible development and deployment of artificial intelligence systems, aiming to mitigate potential harms and promote beneficial societal impact. Current research emphasizes addressing biases in models such as large language models (LLMs) and text-to-image generators; developing frameworks for fairness, transparency, and accountability; and exploring methods for aligning AI systems with human values through techniques such as reinforcement learning from human feedback and social choice theory. This rapidly evolving field is crucial for guiding the development of trustworthy AI, influencing both scientific methodology and the practical application of AI across various sectors.
Papers
AI Fairness in Practice
David Leslie, Cami Rincon, Morgan Briggs, Antonella Perini, Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell, Claudia Fischer, Janis Wong, Ismael Kherroubi Garcia
AI Ethics and Governance in Practice: An Introduction
David Leslie, Cami Rincon, Morgan Briggs, Antonella Perini, Smera Jayadeva, Ann Borda, SJ Bennett, Christopher Burr, Mhairi Aitken, Michael Katell, Claudia Fischer