Paper ID: 2304.08447

RadarFormer: Lightweight and Accurate Real-Time Radar Object Detection Model

Yahia Dalbah, Jean Lahoud, Hisham Cholakkal

The performance of perception systems developed for autonomous vehicles has improved significantly over the last few years, driven largely by the increasing use of LiDAR sensors and point cloud data for object detection and recognition. However, LiDAR and camera systems degrade in unfavorable conditions such as dusty or rainy weather. Radars, on the other hand, operate at longer wavelengths, which makes their measurements far more robust under these conditions. Despite this, radar-centric datasets have received little attention in the development of deep learning techniques for radar perception. In this work, we consider the radar object detection problem, in which radar frequency data is the only input to the detection framework. We further investigate the challenges of using radar-only data in deep learning models. We propose a transformer-based model, named RadarFormer, that leverages state-of-the-art developments in deep learning for vision. Our model also introduces a channel-chirp-time merging module that reduces model size and complexity by more than a factor of ten without compromising accuracy. Comprehensive experiments on the CRUW radar dataset demonstrate the advantages of the proposed method: RadarFormer performs favorably against state-of-the-art methods while running 2x faster at inference and requiring only one-tenth of their model parameters. The code associated with this paper is available at https://github.com/YahiDar/RadarFormer.
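To make the channel-chirp-time merging idea concrete, below is a minimal sketch of how chirp and time axes of a radar RF tensor can be folded into the channel axis and compressed before a 2D backbone. The tensor layout (batch, channels, chirps, time, range, azimuth), the class name, and the 1x1-convolution compression are illustrative assumptions for exposition, not the authors' implementation; see the linked repository for the actual module.

```python
# Hypothetical sketch of a channel-chirp-time merging module (not from the
# authors' repository). Assumes input shape (B, C, K, T, H, W):
# B=batch, C=I/Q channels, K=chirps, T=time steps, HxW=range-azimuth map.
import torch
import torch.nn as nn


class ChannelChirpTimeMerge(nn.Module):
    """Fold chirp and time axes into channels, then compress with a 1x1 conv
    so a lightweight 2D backbone processes a single small feature map."""

    def __init__(self, in_channels: int, chirps: int, time_steps: int, out_channels: int):
        super().__init__()
        # After folding, the channel dimension is in_channels * chirps * time_steps.
        self.compress = nn.Conv2d(in_channels * chirps * time_steps, out_channels, kernel_size=1)

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # (B, C, K, T, H, W) -> (B, C*K*T, H, W): merge chirp/time into channels.
        b, c, k, t, h, w = x.shape
        x = x.reshape(b, c * k * t, h, w)
        return self.compress(x)


if __name__ == "__main__":
    # Example: 2 I/Q channels, 4 chirps, 8 time steps on a 128x128 range-azimuth map.
    merge = ChannelChirpTimeMerge(in_channels=2, chirps=4, time_steps=8, out_channels=32)
    rf = torch.randn(1, 2, 4, 8, 128, 128)
    print(merge(rf).shape)  # torch.Size([1, 32, 128, 128])
```

Collapsing the chirp and time dimensions up front is what lets the rest of the network operate on ordinary 2D feature maps, which is one plausible route to the parameter and latency savings the abstract reports.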

Submitted: Apr 17, 2023