Medical Safety

Medical safety research is intensely focused on mitigating risks associated with emerging technologies, particularly AI, in healthcare. Current efforts involve developing and evaluating methods to improve the accuracy and reliability of AI models, such as large language models (LLMs), for tasks like drug safety monitoring and clinical decision support, often incorporating "guardrails" to prevent errors and hallucinations. This includes the development of benchmark datasets to systematically assess and improve the medical safety of these models, as well as real-time systems for monitoring procedures like PPE donning and doffing. These advancements aim to enhance patient and healthcare worker safety, ultimately improving the quality and reliability of healthcare delivery.

Papers