Legal Autonomy
Legal autonomy in artificial intelligence focuses on enabling AI agents to operate lawfully and responsibly, primarily by either constraining the AI actors themselves or limiting their impact on the environment. Current research emphasizes developing frameworks for autonomous systems across diverse applications (e.g., robotics, autonomous vehicles, mental health support), often employing machine learning models such as Bayesian networks, deep reinforcement learning, and large language models (LLMs) to achieve adaptable and explainable behavior. This research is crucial for ensuring the safe and ethical deployment of increasingly autonomous systems, with impact on fields ranging from manufacturing and transportation to healthcare and space exploration.
Papers
Hermes: A Large Language Model Framework on the Journey to Autonomous Networks
Fadhel Ayed, Ali Maatouk, Nicola Piovesan, Antonio De Domenico, Merouane Debbah, Zhi-Quan Luo
Hardware-in-the-Loop for Characterization of Embedded State Estimation for Flying Microrobots
Aryan Naveen, Jalil Morris, Christian Chan, Daniel Mhrous, E. Farrell Helbling, Nak-Seung Patrick Hyun, Gage Hills, Robert J. Wood
Digital Twin for Autonomous Surface Vessels: Enabler for Safe Maritime Navigation
Daniel Menges, Adil Rasheed
Safety Verification for Evasive Collision Avoidance in Autonomous Vehicles with Enhanced Resolutions
Aliasghar Arab, Milad Khaleghi, Alireza Partovi, Alireza Abbaspour, Chaitanya Shinde, Yashar Mousavi, Vahid Azimi, Ali Karimmoddini
Integrating Reinforcement Learning with Foundation Models for Autonomous Robotics: Methods and Perspectives
Angelo Moroncelli, Vishal Soni, Asad Ali Shahid, Marco Maccarini, Marco Forgione, Dario Piga, Blerina Spahiu, Loris Roveda
Generalizing Motion Planners with Mixture of Experts for Autonomous Driving
Qiao Sun, Huimin Wang, Jiahao Zhan, Fan Nie, Xin Wen, Leimeng Xu, Kun Zhan, Peng Jia, Xianpeng Lang, Hang Zhao