Social Media Regulation

Research on social media regulation focuses on mitigating the societal risks of algorithmic content dissemination and on promoting responsible AI development. Current efforts include building evaluation platforms that assess algorithmic biases and harms, applying natural language processing (NLP) techniques such as large language models (LLMs) to automated regulatory compliance checking and interpretable decision-making, and exploring federated learning methods to address data privacy concerns. This interdisciplinary work is central to establishing effective governance frameworks, ensuring transparency and accountability in AI systems, and ultimately shaping the future of online interaction and technological innovation.
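To make the privacy angle concrete, below is a minimal, hypothetical sketch of federated averaging (FedAvg), one family of the federated learning methods mentioned above: each client trains on its own private data and shares only model parameters with a coordinating server, never raw user data. The 1-D linear model, the toy datasets, and the hyperparameters are illustrative assumptions, not any platform's actual implementation.

```python
def local_update(w, data, lr=0.1, epochs=5):
    """One client's local training: gradient descent on the model y = w * x."""
    for _ in range(epochs):
        grad = sum(2 * (w * x - y) * x for x, y in data) / len(data)
        w -= lr * grad
    return w

def fed_avg(global_w, client_datasets):
    """One server round: average client models, weighted by dataset size."""
    updates = [(local_update(global_w, d), len(d)) for d in client_datasets]
    total = sum(n for _, n in updates)
    return sum(w * n for w, n in updates) / total

# Three clients whose private data all follow y = 2x; the datasets are
# never pooled centrally -- only the scalar weight w travels to the server.
clients = [
    [(1.0, 2.0), (2.0, 4.0)],
    [(3.0, 6.0)],
    [(0.5, 1.0), (1.5, 3.0), (2.5, 5.0)],
]

w = 0.0
for _ in range(20):  # 20 communication rounds
    w = fed_avg(w, clients)
print(round(w, 3))   # converges to the shared optimum w = 2.0
```

The same pattern scales from this scalar weight to full neural-network parameter vectors; the key regulatory point is that only model updates, not user content, leave each client.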

Papers