AI Regulation

AI regulation is a rapidly evolving field that aims to establish frameworks for the responsible development and deployment of artificial intelligence systems, balancing innovation with safety and ethical considerations. Current research focuses on risk assessment methodologies, such as impact assessment reports and liability frameworks, and on the regulatory challenges posed by newer paradigms such as large generative models and federated learning, including how to ensure their trustworthiness and compliance. These efforts are crucial for mitigating potential harms from AI, fostering public trust, and shaping the future of AI development and application across sectors.

Papers