Legal Autonomy
Legal autonomy in artificial intelligence focuses on enabling AI agents to operate lawfully and responsibly, primarily by either constraining the actions available to AI actors or by limiting AI's impact on its environment. Current research emphasizes developing frameworks for autonomous systems across diverse applications (e.g., robotics, autonomous vehicles, mental health support), often employing machine learning models such as Bayesian networks, deep reinforcement learning, and large language models (LLMs) to achieve adaptable and explainable behavior. This research is crucial for ensuring the safe and ethical deployment of increasingly autonomous systems, with impact on fields ranging from manufacturing and transportation to healthcare and space exploration.
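To make the first approach concrete, the idea of "constraining AI actors" can be sketched as a guard layer that filters an agent's proposed actions against explicit rules before execution. The sketch below is purely illustrative and assumes hypothetical names (`Action`, `make_guard`, the speed-limit rule); it is not an API from any of the surveyed papers.

```python
# Illustrative sketch: a rule-based guard that constrains an autonomous
# agent's actions. All names here are hypothetical assumptions.
from dataclasses import dataclass
from typing import Callable, List

@dataclass
class Action:
    name: str
    speed_kmh: float = 0.0

# Each constraint returns True when the action is permissible.
Constraint = Callable[[Action], bool]

def make_guard(constraints: List[Constraint]):
    """Wrap an agent's action selection so only lawful actions pass."""
    def guard(proposed: Action, fallback: Action) -> Action:
        if all(rule(proposed) for rule in constraints):
            return proposed
        return fallback  # revert to a known-safe action
    return guard

# Example: an autonomous vehicle must respect a 50 km/h speed limit.
speed_limit = lambda a: a.speed_kmh <= 50.0
guard = make_guard([speed_limit])

allowed = guard(Action("cruise", 45.0), Action("brake"))   # passes the rule
blocked = guard(Action("cruise", 80.0), Action("brake"))   # falls back
```

In practice, such constraints might encode traffic law, clinical disclosure requirements, or operational limits; the design choice is that the guard sits outside the learned policy, so legal rules remain auditable even when the policy itself is a black box.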
90 papers
Papers
April 3, 2025
March 31, 2025
SACA: A Scenario-Aware Collision Avoidance Framework for Autonomous Vehicles Integrating LLMs-Driven Reasoning
Pro-Routing: Proactive Routing of Autonomous Multi-Capacity Robots for Pickup-and-Delivery Tasks
Are clinicians ethically obligated to disclose their use of medical machine learning systems to patients?
Video-based Traffic Light Recognition by Rockchip RV1126 for Autonomous Driving
Get the Agents Drunk: Memory Perturbations in Autonomous Agent-based Recommender Systems
A Survey of Reinforcement Learning-Based Motion Planning for Autonomous Driving: Lessons Learned from a Driving Task Perspective
March 30, 2025
March 28, 2025
March 27, 2025
Safeguarding Autonomy: a Focus on Machine Learning Decision Systems
Beyond Omakase: Designing Shared Control for Navigation Robots with Blind People
Fine-Grained Evaluation of Large Vision-Language Models in Autonomous Driving
Towards Generating Realistic 3D Semantic Training Data for Autonomous Driving
March 21, 2025