Human Safety
Human safety in the context of rapidly advancing AI systems, particularly large language models (LLMs) and autonomous vehicles, is a critical research area focused on mitigating the risks posed by harmful outputs, unreliable predictions, and unforeseen interactions. Current research emphasizes robust safety mechanisms, including novel algorithms such as Precision Knowledge Editing for LLMs and Physics-Enhanced Residual Policy Learning for autonomous vehicle control, as well as multi-objective learning frameworks that balance safety against task performance. These efforts are crucial for the responsible deployment of AI technologies across sectors, ultimately improving the reliability and trustworthiness of these systems in real-world applications.
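To make the residual-policy idea mentioned above concrete: in this general pattern, a learned correction is added on top of a fixed physics-based controller, so the agent starts from a safe baseline rather than learning control from scratch. The sketch below is a minimal generic illustration of that pattern, not the cited paper's method; the class names, the car-following prior, and all gains and limits are illustrative assumptions.

```python
import numpy as np


class PhysicsPrior:
    """Hand-coded baseline controller (illustrative car-following rule).

    Stands in for any physics-based model; the gains and the 10 m
    target gap are assumptions for this sketch, not from the paper.
    """

    def act(self, obs: np.ndarray) -> np.ndarray:
        gap, rel_speed = obs  # distance and relative speed to lead vehicle
        accel = 0.5 * (gap - 10.0) + 0.3 * rel_speed  # proportional control
        return np.clip(np.array([accel]), -3.0, 3.0)  # respect accel limits


class ResidualPolicy:
    """Learned residual added on top of the physics prior.

    The executed action is prior(obs) + residual(obs); training only
    needs to learn the (typically small) correction term.
    """

    def __init__(self, prior: PhysicsPrior, residual_scale: float = 1.0):
        self.prior = prior
        self.residual_scale = residual_scale
        # Placeholder for a trained network; here it returns a zero residual.
        self.network = lambda obs: np.zeros(1)

    def act(self, obs: np.ndarray) -> np.ndarray:
        base = self.prior.act(obs)
        residual = self.residual_scale * self.network(obs)
        return np.clip(base + residual, -3.0, 3.0)


policy = ResidualPolicy(PhysicsPrior())
print(policy.act(np.array([12.0, -1.5])))  # baseline action + zero residual
```

Because the prior already produces reasonable actions, the residual term can be kept small (via `residual_scale`), which is one common way such hybrid controllers keep exploration within safer bounds during training.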
Papers
A Data-Informed Analysis of Scalable Supervision for Safety in Autonomous Vehicle Fleets
Cameron Hickert, Zhongxia Yan, Cathy Wu
Real-Time Adaptive Industrial Robots: Improving Safety And Comfort In Human-Robot Collaboration
Damian Hostettler, Simon Mayer, Jan Liam Albert, Kay Erik Jenss, Christian Hildebrand