Norm Violation

Norm violation research focuses on understanding how agents (individuals or AI systems) identify, respond to, and resolve breaches of social norms within multi-agent systems. Current research employs a range of approaches, including agent-based modeling, deontic logic, and large language models (LLMs) such as ChatGPT, to study norm violation detection, sanctioning, and conflict resolution. This work advances our understanding of social dynamics, informs the design of robust and ethical AI systems, and supports effective strategies for managing conflict and promoting cooperation in both online and offline environments. The development of benchmarks and datasets, such as the ReNoVi corpus, is also a key area of focus, enabling more rigorous evaluation of competing approaches.
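To make the agent-based modeling approach mentioned above concrete, the following is a minimal, illustrative sketch of a norm-enforcement simulation: agents occasionally violate a shared norm, violations are detected with some probability, and sanctioned violators become less likely to defect in later rounds. It is not drawn from any specific paper; all class names, parameters, and update rules are hypothetical simplifications.

```python
# Hypothetical sketch of an agent-based norm-enforcement loop.
# Agents may violate a norm each round; detected violations are sanctioned,
# which lowers the violator's future propensity to defect.
import random


class Agent:
    def __init__(self, agent_id, defect_prob=0.3):
        self.agent_id = agent_id
        self.defect_prob = defect_prob   # propensity to violate the norm
        self.payoff = 0.0

    def act(self):
        """Return True if the agent violates the norm this round."""
        return random.random() < self.defect_prob

    def sanction(self, penalty=1.0, deterrence=0.05):
        """Apply a sanction: lose payoff and become less likely to defect."""
        self.payoff -= penalty
        self.defect_prob = max(0.0, self.defect_prob - deterrence)


def run_simulation(n_agents=20, n_rounds=50, detection_prob=0.6, seed=0):
    random.seed(seed)
    agents = [Agent(i) for i in range(n_agents)]
    violation_counts = []
    for _ in range(n_rounds):
        violators = [a for a in agents if a.act()]
        for violator in violators:
            # A violation is sanctioned only if some observer detects it.
            if random.random() < detection_prob:
                violator.sanction()
        violation_counts.append(len(violators))
    return violation_counts


if __name__ == "__main__":
    counts = run_simulation()
    print("mean violations, first 10 rounds: %.2f" % (sum(counts[:10]) / 10))
    print("mean violations, last 10 rounds:  %.2f" % (sum(counts[-10:]) / 10))
```

Under these assumptions, the per-round violation count typically declines over time, illustrating how sanctioning can drive norm compliance in such models; real studies vary the detection and sanctioning mechanisms far more richly.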

Papers