Safety-Critical Domain
Safety-critical domains encompass applications where system failures can have severe consequences, demanding high reliability and trustworthiness. Current research focuses on improving the safety and reliability of AI-powered systems in these domains, particularly on the challenges posed by the inherent uncertainty of machine learning models such as deep neural networks and large language models. Key areas of investigation include building robust training datasets, incorporating domain knowledge to guide safe exploration, and applying verification and shielding techniques to mitigate risks. This work is crucial for the safe and responsible deployment of AI in high-stakes applications such as aviation, healthcare, and autonomous driving.
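To make the shielding idea concrete, the following is a minimal Python sketch of runtime action shielding, not the method of any paper listed below. It assumes a hypothetical safety predicate is_safe(state, action), which in practice would be supplied by a formal verifier, plus a prioritized list of fallback actions; the shield passes the agent's proposed action through when the predicate accepts it and otherwise substitutes the first fallback the predicate accepts.

from typing import Callable, Sequence

class Shield:
    """Wraps an RL agent's action selection with a safety filter."""

    def __init__(self, is_safe: Callable[[dict, str], bool],
                 fallbacks: Sequence[str]) -> None:
        self.is_safe = is_safe      # assumed safety predicate (e.g., verifier-derived)
        self.fallbacks = fallbacks  # safe default actions, in priority order

    def filter(self, state: dict, proposed: str) -> str:
        # Let the agent's action through when the checker accepts it.
        if self.is_safe(state, proposed):
            return proposed
        # Otherwise substitute the first fallback the checker accepts.
        for action in self.fallbacks:
            if self.is_safe(state, action):
                return action
        raise RuntimeError("no safe action available in this state")

# Toy usage: forbid accelerating when the gap to the lead vehicle is short.
shield = Shield(
    is_safe=lambda s, a: a != "accelerate" or s["gap_m"] > 10.0,
    fallbacks=["brake", "coast"],
)
print(shield.filter({"gap_m": 4.2}, "accelerate"))  # -> "brake"

The design point is that the learned policy stays untouched; safety is enforced by a small, separately checkable component at the policy-environment boundary.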
Papers
Cross-Modality Safety Alignment
Siyin Wang, Xingsong Ye, Qinyuan Cheng, Junwen Duan, Shimin Li, Jinlan Fu, Xipeng Qiu, Xuanjing Huang
Towards Robust Training Datasets for Machine Learning with Ontologies: A Case Study for Emergency Road Vehicle Detection
Lynn Vonderhaar, Timothy Elvira, Tyler Procko, Omar Ochoa
Verification-Guided Shielding for Deep Reinforcement Learning
Davide Corsi, Guy Amir, Andoni Rodriguez, Cesar Sanchez, Guy Katz, Roy Fox
Towards a Personal Health Large Language Model
Justin Cosentino, Anastasiya Belyaeva, Xin Liu, Nicholas A. Furlotte, Zhun Yang, Chace Lee, Erik Schenck, Yojan Patel, Jian Cui, Logan Douglas Schneider, Robby Bryant, Ryan G. Gomes, Allen Jiang, Roy Lee, Yun Liu, Javier Perez, Jameson K. Rogers, Cathy Speed, Shyam Tailor, Megan Walker, Jeffrey Yu, Tim Althoff, Conor Heneghan, John Hernandez, Mark Malhotra, Leor Stern, Yossi Matias, Greg S. Corrado, Shwetak Patel, Shravya Shetty, Jiening Zhan, Shruthi Prabhakara, Daniel McDuff, Cory Y. McLean