Paper ID: 2410.01808

AI Horizon Scanning, White Paper p3395, IEEE-SA. Part I: Areas of Attention

Marina Cortês, Andrew R. Liddle, Christos Emmanouilidis, Anthony E. Kelly, Ken Matusow, Ragu Ragunathan, Jayne M. Suess, George Tambouratzis, Janusz Zalewski, David A. Bray

Generative Artificial Intelligence (AI) models may drive societal transformation on a scale that demands a delicate balance between opportunity and risk. This manuscript is the first in a series of White Papers informing the development of IEEE-SA's p3395: `Standard for the Implementation of Safeguards, Controls, and Preventive Techniques for Artificial Intelligence (AI) Models', Chair: Marina Cortês (this https URL). In this first horizon-scanning exercise we identify key areas of attention for standards activities in AI. We examine different principles for regulatory efforts, and review notions of accountability, privacy, data rights, and misuse. As befits a safeguards standard, we devote significant attention to the stability of global infrastructures, and consider a possible overdependence on cloud computing that may result from densely coupled AI components. We review the July 2024 CrowdStrike incident, which resembled a cascade failure, as an illustration of the potential impacts on critical infrastructures of AI-induced incidents in the near future. Upcoming articles in the series will focus on regulatory initiatives, technology evolution, and the role of AI in specific domains.

Submitted: Sep 13, 2024