AI Capability

Research on AI capabilities centers on understanding and mitigating the risks posed by increasingly autonomous and powerful AI systems, while also exploring ways to enhance their performance and beneficial applications. Current work focuses on improving AI safety through techniques such as alignment, secure prompting, and structured access, and on evaluating AI performance across diverse tasks, including social intelligence and knowledge work, with large language models (LLMs) as the dominant architecture. This research is crucial for responsible AI development, informing policy decisions, and ensuring the safe and ethical integration of AI into sectors ranging from law enforcement to finance.

Papers