Real World Autonomous Driving
Research on real-world autonomous driving aims to develop vehicles capable of navigating and operating in complex, unpredictable environments without human intervention. Current work focuses heavily on improving perception and planning, employing techniques such as end-to-end learning, behavior cloning, hybrid imitation learning, and reinforcement learning with world models, often built on convolutional neural networks and transformers. These advances are crucial for improving the safety, efficiency, and reliability of autonomous driving, and they inform both the scientific understanding of complex AI systems and the development of practical self-driving technology.
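To make one of the named techniques concrete, the sketch below shows behavior cloning in its simplest form: a small convolutional policy network maps camera images to control commands and is trained by supervised regression onto expert actions. This is a minimal illustration only; the architecture, image size, and two-dimensional action space (steering, throttle) are assumptions for the example and are not taken from any of the papers listed below.

```python
# Minimal behavior-cloning sketch (illustrative, not from the listed papers):
# a small CNN policy regresses expert control commands from camera images.
import torch
import torch.nn as nn

class DrivingPolicy(nn.Module):
    def __init__(self, n_actions: int = 2):  # assumed actions: [steering, throttle]
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=5, stride=2), nn.ReLU(),
            nn.Conv2d(32, 64, kernel_size=3, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(64, n_actions)

    def forward(self, images: torch.Tensor) -> torch.Tensor:
        return self.head(self.encoder(images))

def train_step(policy, optimizer, images, expert_actions):
    """One supervised update: regress the expert's control commands."""
    pred = policy(images)
    loss = nn.functional.mse_loss(pred, expert_actions)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()

if __name__ == "__main__":
    policy = DrivingPolicy()
    optimizer = torch.optim.Adam(policy.parameters(), lr=1e-4)
    # Random tensors stand in for a real driving dataset of image/action pairs.
    images = torch.randn(8, 3, 96, 96)
    expert_actions = torch.randn(8, 2)
    print(train_step(policy, optimizer, images, expert_actions))
```

Pure behavior cloning like this suffers from compounding errors when the vehicle drifts away from expert-visited states, which is one motivation for the hybrid imitation learning and reinforcement learning approaches mentioned above.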
Papers
Generalizing Cooperative Eco-driving via Multi-residual Task Learning
Vindula Jayawardana, Sirui Li, Cathy Wu, Yashar Farid, Kentaro Oguchi
Towards learning-based planning: The nuPlan benchmark for real-world autonomous driving
Napat Karnchanachari, Dimitris Geromichalos, Kok Seang Tan, Nanxiang Li, Christopher Eriksen, Shakiba Yaghoubi, Noushin Mehdipour, Gianmarco Bernasconi, Whye Kit Fong, Yiluan Guo, Holger Caesar
3D Object Visibility Prediction in Autonomous Driving
Chuanyu Luo, Nuo Cheng, Ren Zhong, Haipeng Jiang, Wenyu Chen, Aoli Wang, Pu Li
Multi-task Learning for Real-time Autonomous Driving Leveraging Task-adaptive Attention Generator
Wonhyeok Choi, Mingyu Shin, Hyukzae Lee, Jaehoon Cho, Jaehyeon Park, Sunghoon Im