Driving Context
Driving context research aims to improve autonomous vehicle (AV) safety and performance by giving AV systems a richer understanding of the surrounding environment and of driver behavior. Current work focuses on integrating diverse data sources (e.g., camera images, GPS, driver gaze, EEG) into models that predict driver actions, reconstruct driving scenes, and reason about complex scenarios, typically using deep learning architectures such as convolutional neural networks, recurrent neural networks, and large language models. These models support better AV decision-making and human-AV interaction, and ultimately safer, more reliable autonomous systems.
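As a rough illustration of what "integrating diverse data sources into models that predict driver actions" can look like, the sketch below fuses a camera frame, GPS features, and a driver-gaze sequence into a single action classifier. It is a minimal PyTorch example under assumed inputs and layer sizes; `DriverActionPredictor`, the branch encoders, and the action set are illustrative and not taken from any of the listed papers (EEG is omitted for brevity).

```python
# Minimal multimodal-fusion sketch (illustrative only, not any paper's model):
# camera, GPS, and gaze branches are encoded separately, concatenated, and
# mapped to driver-action logits. All shapes and sizes are assumptions.
import torch
import torch.nn as nn


class DriverActionPredictor(nn.Module):
    def __init__(self, num_actions: int = 4):
        super().__init__()
        # Camera branch: a small CNN over a single RGB frame.
        self.camera_encoder = nn.Sequential(
            nn.Conv2d(3, 16, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.Conv2d(16, 32, kernel_size=3, stride=2, padding=1),
            nn.ReLU(),
            nn.AdaptiveAvgPool2d(1),
            nn.Flatten(),  # -> (batch, 32)
        )
        # GPS branch: latitude, longitude, speed, heading -> small MLP.
        self.gps_encoder = nn.Sequential(nn.Linear(4, 16), nn.ReLU())
        # Gaze branch: a short sequence of 2-D gaze points encoded by a GRU.
        self.gaze_encoder = nn.GRU(input_size=2, hidden_size=16, batch_first=True)
        # Fusion head: concatenate branch features and classify the action.
        self.head = nn.Sequential(
            nn.Linear(32 + 16 + 16, 64),
            nn.ReLU(),
            nn.Linear(64, num_actions),
        )

    def forward(self, frame, gps, gaze_seq):
        cam_feat = self.camera_encoder(frame)          # (batch, 32)
        gps_feat = self.gps_encoder(gps)               # (batch, 16)
        _, gaze_hidden = self.gaze_encoder(gaze_seq)   # (1, batch, 16)
        fused = torch.cat([cam_feat, gps_feat, gaze_hidden[-1]], dim=-1)
        return self.head(fused)                        # action logits


if __name__ == "__main__":
    model = DriverActionPredictor()
    frame = torch.randn(2, 3, 64, 64)     # batch of 2 RGB frames
    gps = torch.randn(2, 4)               # lat, lon, speed, heading
    gaze = torch.randn(2, 10, 2)          # 10 gaze points per sample
    print(model(frame, gps, gaze).shape)  # torch.Size([2, 4])
```

Late fusion by concatenation is only one option; papers in this area also explore attention-based fusion or feeding encoded context into a language model, which the same branch-encoder structure can accommodate.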
Papers
Generalizing Motion Planners with Mixture of Experts for Autonomous Driving
Qiao Sun, Huimin Wang, Jiahao Zhan, Fan Nie, Xin Wen, Leimeng Xu, Kun Zhan, Peng Jia, Xianpeng Lang, Hang Zhao
P-YOLOv8: Efficient and Accurate Real-Time Detection of Distracted Driving
Mohamed R. Elshamy, Heba M. Emara, Mohamed R. Shoaib, Abdel-Hameed A. Badawy