Driving Scene
Driving scene understanding is a core research area in autonomous driving: the goal is to accurately perceive and interpret the environment around a vehicle so it can navigate safely and efficiently. Current work pursues semantic segmentation that remains robust under adverse weather and unstructured traffic, and applies advanced models such as diffusion networks and neural fields to realistic scene generation and 3D reconstruction from multiple sensor modalities (cameras, LiDAR, radar). These advances improve the safety and reliability of autonomous vehicles by enabling more accurate perception, planning, and decision-making.
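To make the segmentation task concrete, below is a minimal sketch that runs a pretrained DeepLabV3 model from torchvision on a single driving-scene image. The image path is a placeholder, and the model choice is purely illustrative (its default checkpoint is trained on COCO/VOC classes, not driving-specific labels such as Cityscapes); it is not drawn from any of the papers listed here.

```python
# Minimal sketch: per-pixel semantic segmentation of a driving-scene image
# with a pretrained DeepLabV3 model. Illustrative only; not the method of
# any paper listed below.
import torch
from torchvision.models.segmentation import (
    deeplabv3_resnet50,
    DeepLabV3_ResNet50_Weights,
)
from PIL import Image

weights = DeepLabV3_ResNet50_Weights.DEFAULT  # COCO/VOC checkpoint
model = deeplabv3_resnet50(weights=weights).eval()

preprocess = weights.transforms()  # resizing/normalization the checkpoint expects

image = Image.open("driving_scene.jpg").convert("RGB")  # placeholder path
batch = preprocess(image).unsqueeze(0)  # shape: (1, 3, H, W)

with torch.no_grad():
    logits = model(batch)["out"]   # (1, num_classes, H, W)
pred = logits.argmax(dim=1)        # per-pixel class indices

print(pred.shape, pred.unique())
```

In practice, driving benchmarks typically use models fine-tuned on driving-specific datasets; the same inference pattern applies once such a checkpoint is substituted.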
Papers
Doe-1: Closed-Loop Autonomous Driving with Large World Model
Wenzhao Zheng, Zetian Xia, Yuanhui Huang, Sicheng Zuo, Jie Zhou, Jiwen Lu
DrivingRecon: Large 4D Gaussian Reconstruction Model For Autonomous Driving
Hao Lu, Tianshuo Xu, Wenzhao Zheng, Yunpeng Zhang, Wei Zhan, Dalong Du, Masayoshi Tomizuka, Kurt Keutzer, Yingcong Chen
TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes
Yanping Fu, Wenbin Liao, Xinyuan Liu, Hang Xu, Yike Ma, Feng Dai, Yucheng Zhang
MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes
Ruiyuan Gao, Kai Chen, Zhihao Li, Lanqing Hong, Zhenguo Li, Qiang Xu