Paper ID: 2408.02088
KAN-RCBEVDepth: A multi-modal fusion algorithm in object detection for autonomous driving
Zhihao Lai, Chuanhao Liu, Shihui Sheng, Zhiqiang Zhang
Accurate 3D object detection in autonomous driving is critical yet challenging due to occlusions, varying object sizes, and complex urban environments. This paper introduces KAN-RCBEVDepth, a method for enhancing 3D object detection by fusing multimodal sensor data from cameras, LiDAR, and millimeter-wave radar. Our Bird's Eye View (BEV)-based approach improves detection accuracy and efficiency by integrating the diverse sensor inputs in a shared BEV representation, refining spatial relationship understanding, and streamlining computation. Experimental results show that the proposed method outperforms existing techniques across multiple detection metrics, achieving a higher Mean Distance AP (0.389, a 23\% improvement), a better ND Score (0.485, a 17.1\% improvement), and a faster evaluation time (71.28 s, 8\% faster). Compared with BEVDepth, KAN-RCBEVDepth also achieves a lower Translation Error (0.6044, 13.8\% improvement), Scale Error (0.2780, 2.6\% improvement), Orientation Error (0.5830, 7.6\% improvement), Velocity Error (0.4244, 28.3\% improvement), and Attribute Error (0.2129, 3.2\% improvement). These findings indicate that the method offers improved accuracy, reliability, and efficiency, making it well suited to dynamic and demanding autonomous driving scenarios. The code will be released at \url{https://github.com/laitiamo/RCBEVDepth-KAN}.
Submitted: Aug 4, 2024
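To make the abstract's central idea concrete, the sketch below shows one plausible shape of a multimodal BEV fusion module with a Kolmogorov-Arnold-style layer. It is a minimal illustration, not the paper's released implementation: all module names, channel sizes, and the simplified KAN layer (a fixed sine-basis expansion followed by a learnable linear map, in place of true learnable spline activations) are assumptions made for brevity.

```python
# Hedged sketch only: NOT the authors' code. Assumes each sensor stream
# (camera, LiDAR, radar) has already been rasterized into a shared
# Bird's Eye View (BEV) feature grid; fusion is plain channel
# concatenation plus a conv, followed by a toy KAN-style layer.
import torch
import torch.nn as nn


class ToyKANLayer(nn.Module):
    """Simplified KAN-style layer: each input channel is expanded with a
    fixed sine basis, and a learnable 1x1 conv mixes the basis responses.
    Real KANs learn the 1-D activation functions (e.g., splines) themselves;
    this fixed basis is a simplification for the sketch."""

    def __init__(self, in_ch: int, out_ch: int, n_basis: int = 4):
        super().__init__()
        # Fixed integer frequencies for the sine basis expansion.
        self.register_buffer("freqs", torch.arange(1, n_basis + 1).float())
        self.proj = nn.Conv2d(in_ch * n_basis, out_ch, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (B, C, H, W) -> (B, C, n_basis, H, W) via broadcasting.
        b, c, h, w = x.shape
        expanded = torch.sin(x.unsqueeze(2) * self.freqs.view(1, 1, -1, 1, 1))
        expanded = expanded.reshape(b, c * self.freqs.numel(), h, w)
        return self.proj(expanded)


class BEVFusion(nn.Module):
    """Fuse per-sensor BEV feature maps: concatenate along channels, mix
    with a 3x3 conv, then apply the toy KAN-style layer. The channel
    widths below are arbitrary assumptions."""

    def __init__(self, cam_ch=64, lidar_ch=64, radar_ch=32, out_ch=128):
        super().__init__()
        self.mix = nn.Conv2d(cam_ch + lidar_ch + radar_ch, out_ch, 3, padding=1)
        self.kan = ToyKANLayer(out_ch, out_ch)

    def forward(self, cam_bev, lidar_bev, radar_bev):
        fused = torch.cat([cam_bev, lidar_bev, radar_bev], dim=1)
        return self.kan(torch.relu(self.mix(fused)))


if __name__ == "__main__":
    # Toy 128x128 BEV grids; real grid size depends on range and resolution.
    cam = torch.randn(1, 64, 128, 128)
    lidar = torch.randn(1, 64, 128, 128)
    radar = torch.randn(1, 32, 128, 128)
    out = BEVFusion()(cam, lidar, radar)
    print(out.shape)  # torch.Size([1, 128, 128, 128])
```

A fused BEV feature map like `out` would then feed a detection head (center heatmaps, box regression, velocity, and attributes in the nuScenes setting); consult the linked repository for the actual architecture once released.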