Legal Autonomy
Legal autonomy in artificial intelligence focuses on enabling AI agents to operate lawfully and responsibly, primarily either by constraining AI actors or by limiting the impact AI systems can have on their environment. Current research emphasizes frameworks for autonomous systems across diverse applications (e.g., robotics, autonomous vehicles, mental health support), often employing machine learning models such as Bayesian networks, deep reinforcement learning, and large language models (LLMs) to achieve adaptable and explainable behavior. This work is crucial for the safe and ethical deployment of increasingly autonomous systems, with implications for fields ranging from manufacturing and transportation to healthcare and space exploration.
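To make the "constraining AI actors" idea concrete, the following is a minimal sketch (not drawn from any of the papers listed below) of an action filter that checks each action an agent proposes against an allow-list of permitted actions before execution, falling back to a safe no-op otherwise. All names here (ActionFilter, run_step, NOOP, the example actions) are illustrative assumptions, not an API from the cited work.

```python
# Minimal sketch of constraining an AI actor: a guard layer that only lets
# explicitly permitted actions through and logs everything it blocks.
from dataclasses import dataclass, field
from typing import Callable, List, Set

NOOP = "noop"  # hypothetical safe fallback action


@dataclass
class ActionFilter:
    """Blocks any proposed action that is not explicitly permitted."""
    permitted: Set[str]
    log: List[str] = field(default_factory=list)

    def apply(self, proposed: str) -> str:
        if proposed in self.permitted:
            return proposed
        # Record the refusal so a human overseer can audit it later.
        self.log.append(f"blocked: {proposed}")
        return NOOP


def run_step(policy: Callable[[str], str], observation: str, guard: ActionFilter) -> str:
    """One interaction step: the policy proposes an action, the guard disposes."""
    proposed = policy(observation)
    return guard.apply(proposed)


if __name__ == "__main__":
    guard = ActionFilter(permitted={"answer_question", "refer_to_clinician"})
    # Stand-in policy that always proposes an action outside the allow-list.
    unsafe_policy = lambda obs: "prescribe_medication"
    print(run_step(unsafe_policy, "user message", guard))  # -> "noop"
    print(guard.log)                                       # -> ["blocked: prescribe_medication"]
```

The design choice illustrated here is that the constraint sits outside the learned policy, so the same guard can wrap a Bayesian network, a deep RL agent, or an LLM-based agent without retraining.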
Papers
Risks from Language Models for Automated Mental Healthcare: Ethics and Structure for Implementation
Declan Grabb, Max Lamparth, Nina Vasan
Imitation Game: A Model-based and Imitation Learning Deep Reinforcement Learning Hybrid
Eric MSP Veith, Torben Logemann, Aleksandr Berezin, Arlena Wellßow, Stephan Balduin
Prioritizing Safeguarding Over Autonomy: Risks of LLM Agents for Science
Xiangru Tang, Qiao Jin, Kunlun Zhu, Tongxin Yuan, Yichi Zhang, Wangchunshu Zhou, Meng Qu, Yilun Zhao, Jian Tang, Zhuosheng Zhang, Arman Cohan, Zhiyong Lu, Mark Gerstein
Explaining Autonomy: Enhancing Human-Robot Interaction through Explanation Generation with Large Language Models
David Sobrín-Hidalgo, Miguel A. González-Santamarta, Ángel M. Guerrero-Higueras, Francisco J. Rodríguez-Lera, Vicente Matellán-Olivera