Normative Systems

Normative systems research explores how artificial intelligence (AI) agents can learn, adapt to, and enforce rules and norms in multi-agent environments. Current research focuses on agent architectures with normative modules that enable cooperation through equilibrium selection and sanctioning mechanisms, often using large language models and Bayesian methods for norm induction and enforcement. This work matters for building trustworthy, ethically aligned AI: it addresses concerns about bias, fairness, and potential misuse, and it informs the design of AI systems that interact responsibly with humans and other agents.
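
To make the idea concrete, below is a minimal Python sketch of one such architecture. It is hypothetical rather than drawn from any particular paper, and all class names and parameter values are illustrative: a NormativeModule performs Bayesian norm induction with a conjugate Beta-Bernoulli posterior over peer compliance and sanctions deviations once compliance is inferred to be the norm, while payoff-driven agents switch to compliance once the expected sanction outweighs the private gain from defecting.

```python
class NormativeModule:
    """Hypothetical normative module: Bayesian norm induction via a
    Beta-Bernoulli posterior over peer compliance, plus a sanctioning
    rule that punishes deviations from the inferred norm."""

    def __init__(self, alpha=1.0, beta=1.0, threshold=0.5):
        self.alpha = alpha          # pseudo-count of observed compliance
        self.beta = beta            # pseudo-count of observed violations
        self.threshold = threshold  # belief needed to treat compliance as the norm

    def observe(self, complied):
        # Conjugate Beta update: norm induction from observed behavior
        if complied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    def norm_belief(self):
        # Posterior mean probability that compliance is the prevailing norm
        return self.alpha / (self.alpha + self.beta)

    def should_sanction(self, complied):
        # Sanction a violation only once compliance is inferred to be the norm
        return (not complied) and self.norm_belief() > self.threshold


class AdaptiveAgent:
    """Payoff-driven agent that defects for a private gain until the
    sanctions it has experienced outweigh that gain."""

    def __init__(self, gain=1.0, sanction_cost=3.0):
        self.gain = gain
        self.sanction_cost = sanction_cost
        self.expected_sanction = 0.0

    def act(self):
        # Comply (True) only when expected punishment deters defection
        return self.expected_sanction >= self.gain

    def learn(self, sanctioned):
        # Exponential moving average over sanction experience when defecting
        signal = self.sanction_cost if sanctioned else 0.0
        self.expected_sanction = 0.8 * self.expected_sanction + 0.2 * signal


if __name__ == "__main__":
    module = NormativeModule()
    followers = 12                                 # agents that always comply
    learners = [AdaptiveAgent() for _ in range(8)]
    for _round in range(30):
        for _ in range(followers):
            module.observe(True)
        for agent in learners:
            complied = agent.act()
            module.observe(complied)
            if not complied:
                agent.learn(module.should_sanction(complied))
    compliance = (followers + sum(a.act() for a in learners)) / (followers + len(learners))
    print(f"norm belief: {module.norm_belief():.2f}  compliance: {compliance:.2f}")
```

The feedback loop is the point of the sketch: sanctions push payoff-driven agents toward compliance, rising compliance strengthens the module's posterior belief in the norm, and that belief in turn sustains the sanctioning, a toy form of the equilibrium selection described above.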

Papers