Social Alignment
Social alignment in AI aims to ensure that an artificial intelligence system's actions align with societal values and goals, covering both the direct objectives of its operators and its broader societal impacts. Current research explores methods such as prompt engineering and simulated social interactions to train models that better reflect societal norms, and often employs graph-based approaches to analyze and leverage social context for improved alignment. This work is crucial for mitigating potential harms from AI systems and for developing more responsible and beneficial AI technologies, with applications ranging from fake news detection to multi-agent systems.
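To make the graph-based idea concrete, here is a minimal, hypothetical sketch of one round of neighbor aggregation over a small social graph, the basic building block such approaches use to pool social context. The graph, feature vectors, and the mean-aggregation rule are illustrative assumptions, not the method of any specific paper.

```python
# Hypothetical example: one message-passing step over a tiny social graph.
# Each user's "social context" is the mean of its own feature vector and
# its neighbors' feature vectors (e.g., stance or credibility features).

# Adjacency list: user -> users they interact with (assumed data)
graph = {
    "a": ["b", "c"],
    "b": ["a"],
    "c": ["a", "b"],
}

# Per-user feature vectors (assumed data)
features = {
    "a": [1.0, 0.0],
    "b": [0.0, 1.0],
    "c": [0.5, 0.5],
}

def aggregate(node):
    """Mean of the node's own features and its neighbors' features."""
    vecs = [features[node]] + [features[n] for n in graph[node]]
    return [sum(col) / len(vecs) for col in zip(*vecs)]

# Context-aware representation for every user
context = {user: aggregate(user) for user in graph}
```

Stacking several such aggregation steps (with learned weights) yields a graph neural network, which is how social context is typically fed into downstream tasks like fake news detection.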