AI Regulation
AI regulation is a rapidly evolving field that aims to establish frameworks for the responsible development and deployment of artificial intelligence systems, balancing innovation against safety and ethical considerations. Current research focuses on risk assessment methodologies, such as impact assessment reports and liability frameworks, and on the regulatory challenges posed by novel architectures like large generative models and federated learning, including how to ensure their trustworthiness and compliance with applicable rules. These efforts are crucial for mitigating the potential harms of AI, fostering public trust, and shaping how AI is developed and applied across sectors.
Papers