Norm Learning
Norm learning, which encompasses the identification, representation, and application of rules and behavioral expectations, is a growing field concerned with how agents (human or artificial) acquire and apply these norms across contexts. Current research emphasizes the development of computational models and algorithms, including those based on large language models and Bayesian methods, to extract norms from data (e.g., text, conversations, agent interactions), to assess their impact on AI systems (e.g., bias detection, fairness), and to design agents that can learn and adhere to norms for improved cooperation and safety. This research is crucial for building more trustworthy and socially responsible AI systems and for understanding the complex interplay between norms, human behavior, and societal structures.
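To make the Bayesian strand of this work concrete, the sketch below shows one minimal way an observer could infer whether a behavioral norm is in force from observed agent interactions. It is an illustrative assumption, not a method taken from any specific paper: compliance with a hypothetical norm is modeled as Bernoulli observations under a conjugate Beta prior, and the class name, threshold, and toy data are all invented for the example.

```python
# Minimal sketch (illustrative assumptions only): Bayesian inference over
# whether a behavioral norm (e.g., "agents yield at an intersection") is in
# force, given observations of agent actions. Compliance is modeled as
# Bernoulli data with a Beta prior; names and values are hypothetical.

from dataclasses import dataclass


@dataclass
class BetaBelief:
    """Beta(alpha, beta) belief over the probability that agents comply with a norm."""
    alpha: float = 1.0  # pseudo-count of observed compliant actions
    beta: float = 1.0   # pseudo-count of observed violations

    def update(self, complied: bool) -> None:
        # Conjugate Beta-Bernoulli update: each observation shifts one pseudo-count.
        if complied:
            self.alpha += 1.0
        else:
            self.beta += 1.0

    @property
    def mean_compliance(self) -> float:
        # Posterior mean of the compliance rate.
        return self.alpha / (self.alpha + self.beta)

    def norm_likely_active(self, threshold: float = 0.8) -> bool:
        # Heuristic decision rule (an assumption, not a standard): treat the
        # norm as active if expected compliance exceeds the threshold.
        return self.mean_compliance > threshold


if __name__ == "__main__":
    belief = BetaBelief()
    observed_actions = [True, True, True, False, True, True, True, True]  # toy data
    for complied in observed_actions:
        belief.update(complied)
    print(f"Estimated compliance rate: {belief.mean_compliance:.2f}")
    print(f"Norm inferred as active: {belief.norm_likely_active()}")
```

Richer approaches in the literature replace this toy observation model with structured data such as dialogue transcripts or multi-agent trajectories, but the same underlying idea of updating a belief over candidate norms from observed behavior carries over.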