Legal Autonomy
Legal autonomy in artificial intelligence focuses on enabling AI agents to operate lawfully and responsibly, primarily by constraining the actions AI systems may take or by limiting their impact on the environments in which they operate. Current research emphasizes developing frameworks for autonomous systems across diverse applications (e.g., robotics, autonomous vehicles, mental health support), often employing machine learning models such as Bayesian networks, deep reinforcement learning, and large language models (LLMs) to achieve adaptable and explainable behavior. This research is crucial for ensuring the safe and ethical deployment of increasingly autonomous systems, with impact on fields ranging from manufacturing and transportation to healthcare and space exploration.
Papers
Enhancing scientific exploration of the deep sea through shared autonomy in remote manipulation
Amy Phung, Gideon Billings, Andrea F. Daniele, Matthew R. Walter, Richard Camilli
Autonomous and Human-Driven Vehicles Interacting in a Roundabout: A Quantitative and Qualitative Evaluation
Laura Ferrarotti, Massimiliano Luca, Gabriele Santin, Giorgio Previati, Gianpiero Mastinu, Massimiliano Gobbi, Elena Campi, Lorenzo Uccello, Antonino Albanese, Praveen Zalaya, Alessandro Roccasalva, Bruno Lepri
Agilicious: Open-Source and Open-Hardware Agile Quadrotor for Vision-Based Flight
Philipp Foehn, Elia Kaufmann, Angel Romero, Robert Penicka, Sihao Sun, Leonard Bauersfeld, Thomas Laengle, Giovanni Cioffi, Yunlong Song, Antonio Loquercio, Davide Scaramuzza
Autonomous and Ubiquitous In-node Learning Algorithms of Active Directed Graphs and Its Storage Behavior
Hui Wei, Weihua Miao, Fushun Li