Driving Scene
Driving scene understanding is a central research area in autonomous driving: accurately perceiving and interpreting the environment around a vehicle so it can navigate safely and efficiently. Current work focuses on semantic segmentation that stays robust under adverse weather and unstructured traffic, and increasingly leverages models such as diffusion networks and neural fields for realistic scene generation and 3D reconstruction from multiple sensor modalities (cameras, LiDAR, radar). These advances improve the safety and reliability of autonomous vehicles by enabling more accurate perception, planning, and decision-making.
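To make the semantic-segmentation notion concrete, here is a minimal, self-contained sketch (toy random data, not tied to any paper below): a segmentation network outputs a per-class score map for every pixel, and the predicted label map is the per-pixel argmax over the class dimension.

```python
import numpy as np

# Hypothetical toy dimensions for illustration only.
H, W, NUM_CLASSES = 4, 6, 3

rng = np.random.default_rng(0)
# A segmentation network would produce logits shaped (class, height, width);
# here we substitute random values to stand in for the network output.
scores = rng.normal(size=(NUM_CLASSES, H, W))

# Per-pixel class prediction: argmax over the class axis.
label_map = scores.argmax(axis=0)

print(label_map.shape)   # (H, W) map of integer class ids
```

Real pipelines work the same way, only with learned score maps and classes such as road, vehicle, and pedestrian.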
Papers
TopoLogic: An Interpretable Pipeline for Lane Topology Reasoning on Driving Scenes
Yanping Fu, Wenbin Liao, Xinyuan Liu, Hang Xu, Yike Ma, Feng Dai, Yucheng Zhang
MagicDrive3D: Controllable 3D Generation for Any-View Rendering in Street Scenes
Ruiyuan Gao, Kai Chen, Zhihao Li, Lanqing Hong, Zhenguo Li, Qiang Xu