Autonomous Driving
Autonomous driving research aims to develop vehicles that can navigate and operate without human intervention, prioritizing safety and efficiency. Current efforts focus heavily on perception (using techniques such as 3D Gaussian splatting and bird's-eye-view representations), prediction (leveraging diffusion models, transformers, and Bayesian games to handle uncertainty), and planning (employing reinforcement learning, large language models, and hierarchical approaches to decision-making). These advances are crucial for improving the reliability and safety of autonomous vehicles, with significant implications for transportation systems and the broader AI community.
Papers
StopNet: Scalable Trajectory and Occupancy Prediction for Urban Autonomous Driving
Jinkyu Kim, Reza Mahjourian, Scott Ettinger, Mayank Bansal, Brandyn White, Ben Sapp, Dragomir Anguelov
A Real-time Critical-scenario-generation Framework for Testing Autonomous Driving System
Yizhou Xie, Kunpeng Dai, Yong Zhang
Benchmarking the Robustness of LiDAR-Camera Fusion for 3D Object Detection
Kaicheng Yu, Tang Tao, Hongwei Xie, Zhiwei Lin, Zhongwei Wu, Zhongyu Xia, Tingting Liang, Haiyang Sun, Jiong Deng, Dayang Hao, Yongtao Wang, Xiaodan Liang, Bing Wang
Time3D: End-to-End Joint Monocular 3D Object Detection and Tracking for Autonomous Driving
Peixuan Li, Jieyu Jin
OpenCalib: A Multi-sensor Calibration Toolbox for Autonomous Driving
Guohang Yan, Liu Zhuochun, Chengjie Wang, Chunlei Shi, Pengjin Wei, Xinyu Cai, Tao Ma, Zhizheng Liu, Zebin Zhong, Yuqian Liu, Ming Zhao, Zheng Ma, Yikang Li
Iso-Dream: Isolating and Leveraging Noncontrollable Visual Dynamics in World Models
Minting Pan, Xiangming Zhu, Yunbo Wang, Xiaokang Yang
Learning to Drive Using Sparse Imitation Reinforcement Learning
Yuci Han, Alper Yilmaz
Real-Time Trajectory Planning for Autonomous Driving with Gaussian Process and Incremental Refinement
Cheng Jie, Chen Yingbing, Zhang Qingwen, Gan Lu, Liu Ming
Collaborative 3D Object Detection for Automatic Vehicle Systems via Learnable Communications
Junyong Wang, Yuan Zeng, Yi Gong
Image-Based Conditioning for Action Policy Smoothness in Autonomous Miniature Car Racing with Reinforcement Learning
Bo-Jiun Hsu, Hoang-Giang Cao, I Lee, Chih-Yu Kao, Jin-Bo Huang, I-Chen Wu
Leveraging Dynamic Objects for Relative Localization Correction in a Connected Autonomous Vehicle Network
Yunshuang Yuan, Monika Sester
TC-Driver: Trajectory Conditioned Driving for Robust Autonomous Racing -- A Reinforcement Learning Approach
Edoardo Ghignone, Nicolas Baumann, Mike Boss, Michele Magno
Visual Attention-based Self-supervised Absolute Depth Estimation using Geometric Priors in Autonomous Driving
Jie Xiang, Yun Wang, Lifeng An, Haiyang Liu, Zijun Wang, Jian Liu
CARNet: A Dynamic Autoencoder for Learning Latent Dynamics in Autonomous Driving Tasks
Andrey Pak, Hemanth Manjunatha, Dimitar Filev, Panagiotis Tsiotras