Autonomous Driving Model
Autonomous driving models aim to create systems capable of safely and reliably navigating vehicles without human intervention. Current research focuses heavily on improving model robustness and interpretability, often employing end-to-end learning architectures, reinforcement learning algorithms (including RLHF), and transformer networks for perception and planning. Key challenges include defending against adversarial attacks, generating realistic and diverse training data (including long videos and safety-critical scenarios), and ensuring compliance with traffic regulations and safety standards. Progress on these fronts is crucial for improving the safety and reliability of autonomous vehicles, with broad implications for transportation systems and urban mobility.
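To make the end-to-end learning idea concrete, here is a minimal sketch of imitation learning for a steering policy: a model maps perception features directly to a control output by regressing onto expert demonstrations. Everything here is synthetic and illustrative only; the linear policy, the feature dimensions, and the hidden "expert" are assumptions for the sketch, and real systems use deep networks over camera or lidar input.

```python
import numpy as np

# Minimal sketch of end-to-end imitation learning for steering control.
# All data, dimensions, and the linear policy are synthetic assumptions.

rng = np.random.default_rng(0)

# Synthetic "perception features" (stand-ins for camera embeddings) and
# expert steering angles produced by a hidden linear "expert" policy.
n_samples, n_features = 256, 8
X = rng.normal(size=(n_samples, n_features))
w_expert = rng.normal(size=n_features)
y = X @ w_expert  # expert demonstrations (steering angles)

# Train a linear policy by gradient descent on the imitation (MSE) loss.
w = np.zeros(n_features)
lr = 0.05
for _ in range(500):
    pred = X @ w                          # policy's predicted steering
    grad = 2 * X.T @ (pred - y) / n_samples  # gradient of mean squared error
    w -= lr * grad

mse = float(np.mean((X @ w - y) ** 2))
print(f"final imitation MSE: {mse:.6f}")
```

In practice, the same supervised objective is applied to a deep perception-to-control network, and methods such as the quantization-aware imitation learning listed below additionally constrain the policy's weights for efficient deployment.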
Papers
PKRD-CoT: A Unified Chain-of-thought Prompting for Multi-Modal Large Language Models in Autonomous Driving
Xuewen Luo, Fan Ding, Yinsheng Song, Xiaofeng Zhang, Junnyong Loo
Quantization-Aware Imitation-Learning for Resource-Efficient Robotic Control
Seongmin Park, Hyungmin Kim, Wonseok Jeon, Juyoung Yang, Byeongwook Jeon, Yoonseon Oh, Jungwook Choi