Legal Autonomy
Legal autonomy in artificial intelligence focuses on enabling AI agents to operate lawfully and responsibly, primarily by either imposing constraints on the AI actors themselves or limiting the scope of the impact AI agents can have on their environment. Current research emphasizes frameworks for autonomous systems across diverse applications (e.g., robotics, autonomous vehicles, mental health support), often employing machine learning models such as Bayesian networks, deep reinforcement learning, and large language models (LLMs) to achieve adaptable and explainable behavior. This work is crucial for the safe and ethical deployment of increasingly autonomous systems, with implications for fields ranging from manufacturing and transportation to healthcare and space exploration.
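To make the first of these strategies concrete, the sketch below illustrates one way of constraining an AI actor: a guardrail layer that screens an agent's proposed actions against explicit, machine-checkable legal rules before they reach the environment, substituting a safe fallback when a rule is violated. This is an illustrative assumption, not an implementation from any paper listed here, and all names (LegalRule, GuardedAgent, the speed-limit rule) are hypothetical.

```python
# Minimal sketch of the "constraining AI actors" approach to legal autonomy.
# Hypothetical names throughout; not taken from any cited paper.
from dataclasses import dataclass
from typing import Callable, List


@dataclass
class LegalRule:
    """A single machine-checkable constraint, e.g. a speed limit."""
    name: str
    predicate: Callable[[dict], bool]  # returns True if the action is permissible


class GuardedAgent:
    """Wraps an arbitrary policy and vetoes actions that violate any rule."""

    def __init__(self, policy: Callable[[dict], dict],
                 rules: List[LegalRule], fallback: dict):
        self.policy = policy
        self.rules = rules
        self.fallback = fallback  # safe default action, e.g. slow down and yield

    def act(self, observation: dict) -> dict:
        action = self.policy(observation)
        violated = [r.name for r in self.rules if not r.predicate(action)]
        if violated:
            # The offending action is replaced, never silently mutated,
            # which keeps each decision auditable (an explainability aid).
            print(f"Vetoed action {action}: violates {violated}")
            return self.fallback
        return action


# Usage: an autonomous-driving-style policy constrained by an urban speed limit.
rules = [LegalRule("speed_limit_50kph",
                   lambda a: a.get("target_speed_kph", 0) <= 50)]
agent = GuardedAgent(policy=lambda obs: {"target_speed_kph": 80},
                     rules=rules,
                     fallback={"target_speed_kph": 50})
print(agent.act({"road": "urban"}))  # prints the veto, then the fallback action
```

Keeping the rules as data rather than baking them into the policy means the same learned policy can be redeployed under different jurisdictions by swapping the rule set, one common motivation for separating legal constraints from behavior models.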
Papers
Bayesian Data Augmentation and Training for Perception DNN in Autonomous Aerial Vehicles
Ashik E Rasul, Humaira Tasnim, Hyung-Jin Yoon, Ayoosh Bansal, Duo Wang, Naira Hovakimyan, Lui Sha, Petros Voulgaris
ITPNet: Towards Instantaneous Trajectory Prediction for Autonomous Driving
Rongqing Li, Changsheng Li, Yuhang Li, Hanjie Li, Yi Chen, Dongchun Ren, Ye Yuan, Guoren Wang
COOOL: Challenge Of Out-Of-Label A Novel Benchmark for Autonomous Driving
Ali K. AlShami, Ananya Kalita, Ryan Rabinowitz, Khang Lam, Rishabh Bezbarua, Terrance Boult, Jugal Kalita
ACT-Bench: Towards Action Controllable World Models for Autonomous Driving
Hidehisa Arai, Keishi Ishihara, Tsubasa Takahashi, Yu Yamaguchi