AI Systems
AI systems are rapidly evolving, prompting intense research into their safety, reliability, and societal impact. Current research focuses on mitigating risks through improved model explainability and interpretability, developing robust auditing and verification methods, and establishing clear liability frameworks. This work spans various model architectures, including large language models and embodied agents, and addresses crucial challenges in fairness, bias, and user trust, with implications for both scientific understanding and the responsible deployment of AI in diverse applications.
Papers
December 21, 2021
Validation and Transparency in AI systems for pharmacovigilance: a case study applied to the medical literature monitoring of adverse events
Bruno Ohana, Jack Sullivan, Nicole Baker
Towards a Science of Human-AI Decision Making: A Survey of Empirical Studies
Vivian Lai, Chacha Chen, Q. Vera Liao, Alison Smith-Renner, Chenhao Tan