Normative Systems
Normative systems research explores how artificial intelligence (AI) agents can learn, adapt to, and enforce rules and norms in multi-agent environments. Current work focuses on agent architectures that incorporate normative modules, enabling cooperation through equilibrium selection and sanctioning mechanisms, and often uses large language models and Bayesian methods for norm induction and enforcement. This research is crucial for building trustworthy, ethically aligned AI: it addresses concerns about bias, fairness, and misuse, and informs the design of agents that interact responsibly with humans and with one another.
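To make the induce-comply-sanction loop concrete, the following minimal Python sketch shows one simple way such a normative module could work; the class and method names (`NormativeModule`, `observe`, `should_sanction`, etc.) are illustrative assumptions, not drawn from any particular paper. The agent maintains a Beta posterior, per action, over the probability that the action draws a sanction (Bayesian norm induction), prefers actions it believes are compliant (equilibrium selection), and sanctions observed actions it is confident violate the induced norm (enforcement).

```python
import random


class NormativeModule:
    """Hypothetical sketch of a normative module: the agent keeps a
    Beta(alpha, beta) posterior, per action, over the probability that
    the action draws a sanction, i.e. violates a shared norm."""

    def __init__(self, actions, prior=(1.0, 1.0)):
        # Independent Beta prior over "this action is sanctioned".
        self.posterior = {a: list(prior) for a in actions}

    def observe(self, action, was_sanctioned):
        """Bayesian norm induction: update the posterior after watching
        whether an action (our own or another agent's) drew a sanction."""
        if was_sanctioned:
            self.posterior[action][0] += 1.0  # evidence for a norm
        else:
            self.posterior[action][1] += 1.0  # evidence against one

    def violation_prob(self, action):
        # Posterior mean of the Beta distribution.
        alpha, beta = self.posterior[action]
        return alpha / (alpha + beta)

    def choose(self, threshold=0.5):
        """Equilibrium selection via compliance: prefer actions the agent
        believes are unlikely to be sanctioned."""
        compliant = [a for a in self.posterior
                     if self.violation_prob(a) < threshold]
        return random.choice(compliant or list(self.posterior))

    def should_sanction(self, action, threshold=0.9):
        """Enforcement: sanction an observed action once the agent is
        confident it violates the induced norm."""
        return self.violation_prob(action) > threshold


# Toy run: the agent watches other agents, and the group sanctions
# "defect" but never "cooperate".
agent = NormativeModule(actions=["cooperate", "defect"])
for _ in range(50):
    other = random.choice(["cooperate", "defect"])
    agent.observe(other, was_sanctioned=(other == "defect"))

print(agent.choose())                    # "cooperate" with high probability
print(agent.should_sanction("defect"))   # True once the norm is induced
```

In the literature this role is played by richer machinery, for example LLM-based classifiers that judge whether an action violates a norm, but the same induce-comply-sanction loop is the common core.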