AI Systems
AI systems are evolving rapidly, prompting intense research into their safety, reliability, and societal impact. Current research focuses on mitigating risks by improving model explainability and interpretability, developing robust auditing and verification methods, and establishing clear liability frameworks. This work spans a range of model architectures, including large language models and embodied agents, and addresses crucial challenges in fairness, bias, and user trust, with implications for both scientific understanding and the responsible deployment of AI across diverse applications.
Papers
Broadening the perspective for sustainable AI: Comprehensive sustainability criteria and indicators for AI systems
Friederike Rohde, Josephin Wagner, Andreas Meyer, Philipp Reinhard, Marcus Voss, Ulrich Petschow, Anne Mollen
Towards Regulatable AI Systems: Technical Gaps and Policy Opportunities
Xudong Shen, Hannah Brown, Jiashu Tao, Martin Strobel, Yao Tong, Akshay Narayan, Harold Soh, Finale Doshi-Velez