Policy Value

Policy value research focuses on aligning artificial intelligence systems, particularly large language models (LLMs), with human values and societal norms. Current work emphasizes robust evaluation frameworks and benchmarks for assessing this alignment across diverse contexts, drawing on techniques such as Bayesian inverse reinforcement learning and generative evolving testing; transformer-based models are also being explored for imputing missing data in value-related datasets. This research is crucial for mitigating potential harms from AI systems and for ensuring responsible development and deployment, with applications ranging from news recommendation to healthcare and education.
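To make the Bayesian inverse reinforcement learning idea concrete, the following is a minimal illustrative sketch, not taken from any specific paper above: it infers a posterior over hidden reward (value) weights from demonstrated choices, assuming a Boltzmann-rational demonstrator and a hypothetical two-feature reward model.

```python
import numpy as np

# Minimal Bayesian IRL sketch: recover hidden "value" weights
# from observed choices of a Boltzmann-rational demonstrator.
# All features, weights, and parameters here are illustrative assumptions.

rng = np.random.default_rng(0)

# Feature vectors for 3 hypothetical actions.
features = np.array([
    [1.0, 0.0],   # action 0
    [0.0, 1.0],   # action 1
    [0.5, 0.5],   # action 2
])

true_w = np.array([0.2, 0.8])   # hidden value weights to recover
beta = 5.0                      # demonstrator rationality (inverse temperature)

def choice_probs(w):
    """Boltzmann policy: P(a | w) proportional to exp(beta * w . phi(a))."""
    logits = beta * features @ w
    e = np.exp(logits - logits.max())
    return e / e.sum()

# Simulate demonstrations drawn from the true weights.
demos = rng.choice(len(features), size=200, p=choice_probs(true_w))

# Discrete grid of candidate weight vectors (w, 1 - w), uniform prior.
grid = [np.array([w, 1.0 - w]) for w in np.linspace(0.0, 1.0, 21)]
log_post = np.array([np.log(choice_probs(w)[demos]).sum() for w in grid])
post = np.exp(log_post - log_post.max())
post /= post.sum()

map_w = grid[int(np.argmax(post))]
print("MAP value weights:", map_w)
```

The posterior concentrates near the demonstrator's true weighting of the two value features; richer variants replace the grid with MCMC and the bandit-style choice model with a full MDP.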

Papers